
Dude, you broke the future!

This is the text of my keynote speech at the 34th Chaos Communication Congress in Leipzig, December 2017.

(You can also watch it on YouTube, but it runs to about 45 minutes.)




Abstract: We're living in yesterday's future, and it's nothing like the speculations of our authors and film/TV producers. As a working science fiction novelist, I take a professional interest in how we get predictions about the future wrong, and why, so that I can avoid repeating the same mistakes. Science fiction is written by people embedded within a society with expectations and political assumptions that bias us towards looking at the shiny surface of new technologies rather than asking how human beings will use them, and towards taking narratives of progress at face value rather than asking what hidden agenda they serve.

In this talk, author Charles Stross will give a rambling, discursive, and angry tour of what went wrong with the 21st century, why we didn't see it coming, where we can expect it to go next, and a few suggestions for what to do about it if we don't like it.




Good morning. I'm Charlie Stross, and it's my job to tell lies for money. Or rather, I write science fiction, much of it about our near future, which has in recent years become ridiculously hard to predict.

Our species, Homo Sapiens Sapiens, is roughly three hundred thousand years old. (Recent discoveries pushed back the date of our earliest remains that far; we may be even older.) For all but the last three centuries of that span, predicting the future was easy: natural disasters aside, everyday life in fifty years' time would resemble everyday life fifty years ago.

Let that sink in for a moment: for 99.9% of human existence, the future was static. Then something happened, and the future began to change, increasingly rapidly, until we get to the present day when things are moving so fast that it's barely possible to anticipate trends from month to month.

As an eminent computer scientist once remarked, computer science is no more about computers than astronomy is about building telescopes. The same can be said of my field of work, written science fiction. Scifi is seldom about science—and even more rarely about predicting the future. But sometimes we dabble in futurism, and lately it's gotten very difficult.

How to predict the near future

When I write a near-future work of fiction, one set, say, a decade hence, there used to be a recipe that worked eerily well. Simply put, 90% of the next decade's stuff is already here today. Buildings are designed to last many years. Automobiles have a design life of about a decade, so half the cars on the road will probably still be around in 2027. People ... there will be new faces, aged ten and under, and some older people will have died, but most adults will still be around, albeit older and grayer. This is the 90% of the near future that's already here.

After the already-here 90%, another 9% of the future a decade hence used to be easily predictable. You look at trends dictated by physical limits, such as Moore's Law, and you look at Intel's road map, and you use a bit of creative extrapolation, and you won't go too far wrong. If I predict that in 2027 LTE cellular phones will be everywhere, 5G will be available for high bandwidth applications, and fallback to satellite data service will be available at a price, you won't laugh at me. It's not like I'm predicting that airliners will fly slower and Nazis will take over the United States, is it?

And therein lies the problem: it's the 1% of unknown unknowns that throws off all calculations. As it happens, airliners today are slower than they were in the 1970s, and don't get me started about Nazis. Nobody in 2007 was expecting a Nazi revival in 2017, right? (Only this time round Germans get to be the good guys.)

My recipe for fiction set ten years in the future used to be 90% already-here, 9% not-here-yet but predictable, and 1% who-ordered-that. But unfortunately the ratios have changed. I think we're now down to maybe 80% already-here—climate change takes a huge toll on infrastructure—then 15% not-here-yet but predictable, and a whopping 5% of utterly unpredictable deep craziness.

Ruling out the singularity

Some of you might assume that, as the author of books like "Singularity Sky" and "Accelerando", I attribute this to an impending technological singularity, to our development of self-improving artificial intelligence and mind uploading and the whole wish-list of transhumanist aspirations promoted by the likes of Ray Kurzweil. Unfortunately this isn't the case. I think transhumanism is a warmed-over Christian heresy. While its adherents tend to be vehement atheists, they can't quite escape from the history that gave rise to our current western civilization. Many of you are familiar with design patterns, an approach to software engineering that focusses on abstraction and simplification in order to promote reusable code. When you look at the AI singularity as a narrative, and identify the numerous places in the story where the phrase "... and then a miracle happens" occurs, it becomes apparent pretty quickly that they've reinvented Christianity.

Indeed, the wellsprings of today's transhumanists draw on a long, rich history of Russian Cosmist philosophy exemplified by the Russian Orthodox theologian Nikolai Fyodorovich Fyodorov, by way of his disciple Konstantin Tsiolkovsky, whose derivation of the rocket equation makes him essentially the father of modern spaceflight. And once you start probing the nether regions of transhumanist thought and run into concepts like Roko's Basilisk—by the way, any of you who didn't know about the Basilisk before are now doomed to an eternity in AI hell—you realize they've mangled it to match some of the nastiest ideas in Presbyterian Protestantism.

If it walks like a duck and quacks like a duck, it's probably a duck. And if it looks like a religion it's probably a religion. I don't see much evidence for human-like, self-directed artificial intelligences coming along any time soon, and a fair bit of evidence that nobody except some freaks in university cognitive science departments even wants it. What we're getting, instead, is self-optimizing tools that defy human comprehension but are not, in fact, any more like our kind of intelligence than a Boeing 737 is like a seagull. So I'm going to wash my hands of the singularity as an explanatory model without further ado—I'm one of those vehement atheists too—and try and come up with a better model for what's happening to us.

Towards a better model for the future

As my fellow SF author Ken MacLeod likes to say, the secret weapon of science fiction is history. History, loosely speaking, is the written record of what and how people did things in past times—times that have slipped out of our personal memories. We science fiction writers tend to treat history as a giant toy chest to raid whenever we feel like telling a story. With a little bit of history it's really easy to whip up an entertaining yarn about a galactic empire that mirrors the development and decline of the Hapsburg Empire, or to re-spin the October Revolution as a tale of how Mars got its independence.

But history is useful for so much more than that.

It turns out that our personal memories don't span very much time at all. I'm 53, and I barely remember the 1960s. I only remember the 1970s with the eyes of a 6- to 16-year-old. My father, who died last year aged 93, just about remembered the 1930s. Only those of my father's generation are able to directly remember the Great Depression and compare it to the 2007/08 global financial crisis. But westerners tend to pay little attention to cautionary tales told by ninety-somethings. We modern, change-obsessed humans tend to repeat our biggest social mistakes when they slip out of living memory, which means they recur on a time scale of seventy to a hundred years.

So if our personal memories are useless, it's time for us to look for a better cognitive toolkit.

History gives us the perspective to see what went wrong in the past, and to look for patterns, and check whether those patterns apply to the present and near future. And looking in particular at the history of the past 200-400 years—the age of increasingly rapid change—one glaringly obvious deviation from the norm of the preceding three thousand centuries is the development of Artificial Intelligence, which happened no earlier than 1553 and no later than 1844.

I'm talking about the very old, very slow AIs we call corporations, of course. What lessons from the history of the company can we draw that tell us about the likely behaviour of the type of artificial intelligence we are all interested in today?

Old, slow AI

Let me crib from Wikipedia for a moment:

In the late 18th century, Stewart Kyd, the author of the first treatise on corporate law in English, defined a corporation as:

a collection of many individuals united into one body, under a special denomination, having perpetual succession under an artificial form, and vested, by policy of the law, with the capacity of acting, in several respects, as an individual, particularly of taking and granting property, of contracting obligations, and of suing and being sued, of enjoying privileges and immunities in common, and of exercising a variety of political rights, more or less extensive, according to the design of its institution, or the powers conferred upon it, either at the time of its creation, or at any subsequent period of its existence.

—A Treatise on the Law of Corporations, Stewart Kyd (1793-1794)

In 1844, the British government passed the Joint Stock Companies Act, which created a register of companies and allowed any legal person, for a fee, to register a company, which existed as a separate legal person. Subsequently, the law was extended to limit the liability of individual shareholders in event of business failure, and both Germany and the United States added their own unique extensions to what we see today as the doctrine of corporate personhood.

(Of course, there were plenty of other things happening between the sixteenth and twenty-first centuries that changed the shape of the world we live in. I've skipped changes in agricultural productivity due to energy economics, which finally broke the Malthusian trap our predecessors lived in. This in turn broke the long term cap on economic growth of around 0.1% per year in the absence of famine, plagues, and wars depopulating territories and making way for colonial invaders. I've skipped the germ theory of diseases, and the development of trade empires in the age of sail and gunpowder that were made possible by advances in accurate time-measurement. I've skipped the rise and—hopefully—decline of the pernicious theory of scientific racism that underpinned western colonialism and the slave trade. I've skipped the rise of feminism, the ideological position that women are human beings rather than property, and the decline of patriarchy. I've skipped the whole of the Enlightenment and the age of revolutions! But this is a technocentric congress, so I want to frame this talk in terms of AI, which we all like to think we understand.)

Here's the thing about corporations: they're clearly artificial, but legally they're people. They have goals, and operate in pursuit of these goals. And they have a natural life cycle. In the 1950s, a typical US corporation on the S&P 500 index had a lifespan of 60 years, but today it's down to less than 20 years.

Corporations are cannibals; they consume one another. They are also hive superorganisms, like bees or ants. For their first century and a half they relied entirely on human employees for their internal operation, although they are automating their business processes increasingly rapidly this century. Each human is only retained so long as they can perform their assigned tasks, and can be replaced with another human, much as the cells in our own bodies are functionally interchangeable (and a group of cells can, in extremis, often be replaced by a prosthesis). To some extent corporations can be trained to service the personal desires of their chief executives, but even CEOs can be dispensed with if their activities damage the corporation, as Harvey Weinstein found out a couple of months ago.

Finally, our legal environment today has been tailored for the convenience of corporate persons, rather than human persons, to the point where our governments now mimic corporations in many of their internal structures.

What do AIs want?

What do our current, actually-existing AI overlords want?

Elon Musk—who I believe you have all heard of—has an obsessive fear of one particular hazard of artificial intelligence—which he conceives of as being a piece of software that functions like a brain-in-a-box—namely, the paperclip maximizer. A paperclip maximizer is a term of art for a goal-seeking AI that has a single priority, for example maximizing the number of paperclips in the universe. The paperclip maximizer is able to improve itself in pursuit of that goal but has no ability to vary its goal, so it will ultimately attempt to convert all the metallic elements in the solar system into paperclips, even if this is obviously detrimental to the wellbeing of the humans who designed it.

Unfortunately, Musk isn't paying enough attention. Consider his own companies. Tesla is a battery maximizer—an electric car is a battery with wheels and seats. SpaceX is an orbital payload maximizer, driving down the cost of space launches in order to encourage more sales for the service it provides. Solar City is a photovoltaic panel maximizer. And so on. All three of Musk's very own slow AIs are based on an architecture that is designed to maximize return on shareholder investment, even if by doing so they cook the planet the shareholders have to live on. (But if you're Elon Musk, that's okay: you plan to retire on Mars.)

The problem with corporations is that despite their overt goals—whether they make electric vehicles or beer or sell life insurance policies—they are all subject to instrumental convergence insofar as they all have a common implicit paperclip-maximizer goal: to generate revenue. If they don't make money, they are eaten by a bigger predator or they go bust. Making money is an instrumental goal—it's as vital to them as breathing is for us mammals, and without pursuing it they will fail to achieve their final goal, whatever it may be. Corporations generally pursue their instrumental goals—notably maximizing revenue—as a side-effect of the pursuit of their overt goal. But sometimes they try instead to manipulate the regulatory environment they operate in, to ensure that money flows towards them regardless.

Human tool-making culture has become increasingly complicated over time. New technologies always come with an implicit political agenda that seeks to extend their use, governments react by legislating to control the technologies, and sometimes we end up with industries indulging in legal duels.

For example, consider the automobile. You can't have mass automobile transport without gas stations and fuel distribution pipelines. These in turn require access to whoever owns the land the oil is extracted from—and before you know it, you end up with a permanent occupation force in Iraq and a client dictatorship in Saudi Arabia. Closer to home, automobiles imply jaywalking laws and drink-driving laws. They affect town planning regulations and encourage suburban sprawl, the construction of human infrastructure on the scale required by automobiles, not pedestrians. This in turn is bad for competing transport technologies like buses or trams (which work best in cities with a high population density).

To get these laws in place, providing an environment conducive to doing business, corporations spend money on political lobbyists—and, when they can get away with it, on bribes. Bribery need not be blatant, of course. For example, the reforms of the British railway network in the 1960s dismembered many branch services and coincided with a surge in road building and automobile sales. These reforms were orchestrated by Transport Minister Ernest Marples, who was purely a politician. However, Marples accumulated a considerable personal fortune during this time by owning shares in a motorway construction corporation. (So, no conflict of interest there!)

The automobile industry in isolation isn't a pure paperclip maximizer. But if you look at it in conjunction with the fossil fuel industries, the road-construction industry, the accident insurance industry, and so on, you begin to see the outline of a paperclip maximizing ecosystem that invades far-flung lands and grinds up and kills around one and a quarter million people per year—that's the global death toll from automobile accidents according to the World Health Organization: it rivals the First World War on an ongoing basis—as side-effects of its drive to sell you a new car.

Automobiles are not, of course, a total liability. Today's cars are regulated stringently for safety and, in theory, to reduce toxic emissions: they're fast, efficient, and comfortable. We can thank legally mandated regulations for this, of course. Go back to the 1970s and cars didn't have crumple zones. Go back to the 1950s and cars didn't come with seat belts as standard. In the 1930s, indicators—turn signals—and brakes on all four wheels were optional, and your best hope of surviving a 50km/h crash was to be thrown clear of the car and land somewhere without breaking your neck. Regulatory agencies are our current political systems' tool of choice for preventing paperclip maximizers from running amok. But unfortunately they don't always work.

One failure mode that you should be aware of is regulatory capture, where regulatory bodies are captured by the industries they control. Ajit Pai, head of the American Federal Communications Commission who just voted to eliminate net neutrality rules, has worked as Associate General Counsel for Verizon Communications Inc, the largest current descendant of the Bell telephone system monopoly. Why should someone with a transparent interest in a technology corporation end up in charge of a regulator for the industry that corporation operates within? Well, if you're going to regulate a highly complex technology, you need to recruit your regulators from among those people who understand it. And unfortunately most of those people are industry insiders. Ajit Pai is clearly very much aware of how Verizon is regulated, and wants to do something about it—just not necessarily in the public interest. When regulators end up staffed by people drawn from the industries they are supposed to control, they frequently end up working with their former officemates to make it easier to turn a profit, either by raising barriers to keep new insurgent companies out, or by dismantling safeguards that protect the public.

Another failure mode is regulatory lag, when a technology advances so rapidly that regulations are laughably obsolete by the time they're issued. Consider the EU directive requiring cookie notices on websites, to caution users that their activities were tracked and their privacy might be violated. This would have been a good idea, had it shown up in 1993 or 1996, but unfortunately it didn't show up until 2011, by which time the web was vastly more complex. Fingerprinting and tracking mechanisms that had nothing to do with cookies were already widespread by then. Tim Berners-Lee observed in 1995 that five years' worth of change was happening on the web for every twelve months of real-world time; by that yardstick, the cookie law came out nearly a century too late to do any good.

Again, look at Uber. This month the European Court of Justice ruled that Uber is a taxi service, not just a web app. This is arguably correct; the problem is, Uber has spread globally since it was founded eight years ago, subsidizing its drivers to put competing private hire firms out of business. Whether this is a net good for society is arguable; the problem is, a taxi driver can get awfully hungry if she has to wait eight years for a court ruling against a predator intent on disrupting her life.

So, to recap: firstly, we already have paperclip maximizers (and Musk's AI alarmism is curiously mirror-blind). Secondly, we have mechanisms for keeping them in check, but they don't work well against AIs that deploy the dark arts—especially corruption and bribery—and they're even worse against true AIs that evolve too fast for human-mediated mechanisms like the Law to keep up with. Finally, unlike the naive vision of a paperclip maximizer, existing AIs have multiple agendas—their overt goal, but also profit-seeking, and expansion into new areas, and accommodating the desires of whoever is currently in the driver's seat.

How it all went wrong

It seems to me that our current political upheavals are best understood as arising from the capture of post-1917 democratic institutions by large-scale AIs. Everywhere I look I see voters protesting angrily against an entrenched establishment that seems determined to ignore the wants and needs of their human voters in favour of the machines. The Brexit upset was largely the result of a protest vote against the British political establishment; the election of Donald Trump likewise, with a side-order of racism on top. Our major political parties are led by people who are compatible with the system as it exists—a system that has been shaped over decades by corporations distorting our government and regulatory environments. We humans are living in a world shaped by the desires and needs of AIs, forced to live on their terms, and we are taught that we are valuable only insofar as we contribute to the rule of the machines.

Now, this is CCC, and we're all more interested in computers and communications technology than this historical crap. But as I said earlier, history is a secret weapon if you know how to use it. What history is good for is enabling us to spot recurring patterns in human behaviour that repeat across time scales outside our personal experience—decades or centuries apart. If we look at our historical very slow AIs, what lessons can we learn from them about modern AI—the flash flood of unprecedented deep learning and big data technologies that have overtaken us in the past decade?

We made a fundamentally flawed, terrible design decision back in 1995, that has damaged democratic political processes, crippled our ability to truly understand the world around us, and led to the angry upheavals of the present decade. That mistake was to fund the build-out of the public world wide web—as opposed to the earlier, government-funded corporate and academic internet—by monetizing eyeballs via advertising revenue.

(Note: Cory Doctorow has a contrarian thesis: The dotcom boom was also an economic bubble because the dotcoms came of age at a tipping point in financial deregulation, the point at which the Reagan-Clinton-Bush reforms that took the Depression-era brakes off financialization were really picking up steam. That meant that the tech industry's heady pace of development was the first testbed for treating corporate growth as the greatest virtue, built on the lie of the fiduciary duty to increase profit above all other considerations. I think he's entirely right about this, but it's a bit of a chicken-and-egg argument: we wouldn't have had a commercial web in the first place without a permissive, deregulated financial environment. My memory of working in the dot-com 1.0 bubble is that, outside of a couple of specific environments (the Silicon Valley area and the Boston-Cambridge corridor) venture capital was hard to find until late 1998 or thereabouts: the bubble's initial inflation was demand-driven rather than capital-driven, as the non-tech investment sector was late to the party. Caveat: I didn't win the lottery, so what do I know?)

The ad-supported web that we live with today wasn't inevitable. If you recall the web as it was in 1994, there were very few ads at all, and not much in the way of commerce. (What ads there were were mostly spam, on usenet and via email.) 1995 was the year the world wide web really came to public attention in the anglophone world and consumer-facing websites began to appear. Nobody really knew how this thing was going to be paid for (the original dot com bubble was largely about working out how to monetize the web for the first time, and a lot of people lost their shirts in the process). And the naive initial assumption was that the transaction cost of setting up a TCP/IP connection over modem was too high to be supported by per-use microbilling, so we would bill customers indirectly, by shoving advertising banners in front of their eyes and hoping they'd click through and buy something.

Unfortunately, advertising is an industry. Which is to say, it's the product of one of those old-fashioned very slow AIs I've been talking about. Advertising tries to maximize its hold on the attention of the minds behind each human eyeball: the coupling of advertising with web search was an inevitable outgrowth. (How better to attract the attention of reluctant subjects than to find out what they're really interested in seeing, and sell ads that relate to those interests?)

The problem with applying the paperclip maximizer approach to monopolizing eyeballs, however, is that eyeballs are a scarce resource. There are only 168 hours in every week in which I can gaze at banner ads. Moreover, most ads are irrelevant to my interests and it doesn't matter how often you flash an ad for dog biscuits at me, I'm never going to buy any. (I'm a cat person.) To make best revenue-generating use of our eyeballs, it is necessary for the ad industry to learn who we are and what interests us, and to target us increasingly minutely in hope of hooking us with stuff we're attracted to.

At this point in a talk I'd usually go into an impassioned rant about the hideous corruption and evil of Facebook, but I'm guessing you've heard it all before so I won't bother. The too-long-didn't-read summary is, Facebook is as much a search engine as Google or Amazon. Facebook searches are optimized for Faces, that is, for human beings. If you want to find someone you fell out of touch with thirty years ago, Facebook probably knows where they live, what their favourite colour is, what size shoes they wear, and what they said about you to your friends all those years ago that made you cut them off.

Even if you don't have a Facebook account, Facebook has a You account—a hole in their social graph with a bunch of connections pointing into it and your name tagged on your friends' photographs. They know a lot about you, and they sell access to their social graph to advertisers who then target you, even if you don't think you use Facebook. Indeed, there's barely any point in not using Facebook these days: they're the social media Borg, resistance is futile.

However, Facebook is trying to get eyeballs on ads, as is Twitter, as is Google. To do this, they fine-tune the content they show you to make it more attractive to your eyes—and by 'attractive' I do not mean pleasant. We humans have an evolved automatic reflex to pay attention to threats and horrors as well as pleasurable stimuli: consider the way highway traffic always slows to a crawl as it is funnelled past an accident site. The algorithms that determine what to show us when we look at Facebook or Twitter take this bias into account. You might react more strongly to a public hanging in Iran than to a couple kissing: the algorithm knows, and will show you whatever makes you pay attention.
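To make that concrete, here is a minimal sketch of what an engagement-maximizing feed ranker looks like. Every name and number in it is invented for illustration; it is not any real platform's code. The point is simply that the objective is predicted attention, and the emotional quality of that attention never enters the calculation.

    # Minimal sketch of an engagement-maximizing feed ranker. All names and
    # numbers are invented for illustration; this is not any real platform's code.
    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        predicted_click_prob: float     # output of some engagement-prediction model
        predicted_dwell_seconds: float
        emotional_valence: float        # -1.0 (horror/outrage) .. +1.0 (delight)

    def engagement_score(post: Post) -> float:
        # Note what is absent: emotional_valence never enters the objective.
        return post.predicted_click_prob * post.predicted_dwell_seconds

    def rank_feed(candidates):
        return sorted(candidates, key=engagement_score, reverse=True)

    feed = rank_feed([
        Post("public-hanging-footage", 0.9, 45.0, -0.9),
        Post("couple-kissing-photo", 0.4, 5.0, 0.8),
    ])
    print([p.post_id for p in feed])    # the horrifying item ranks first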

This brings me to another interesting point about computerized AI, as opposed to corporatized AI: AI algorithms tend to embody the prejudices and beliefs of the programmers. A couple of years ago I ran across an account of a webcam developed by mostly-pale-skinned Silicon Valley engineers that had difficulty focusing or achieving correct colour balance when pointed at dark-skinned faces. That's an example of human-programmer-induced bias. But with today's deep learning, bias can creep in via the data sets the neural networks are trained on. Microsoft's first foray into a conversational chatbot driven by machine learning, Tay, was yanked offline within days when 4chan- and Reddit-based trolls discovered they could train it towards racism and sexism for shits and giggles.

Humans may be biased, but at least we're accountable and if someone gives you racist or sexist abuse to your face you can complain (or punch them). But it's impossible to punch a corporation, and it may not even be possible to identify the source of unfair bias when you're dealing with a machine learning system.

AI-based systems that concretize existing prejudices and social outlooks make it harder for activists like us to achieve social change. Traditional advertising works by playing on the target customer's insecurity and fear as much as on their aspirations, which in turn play on the target's relationship with their surrounding cultural matrix. Fear of loss of social status and privilege is a powerful stimulus, and fear and xenophobia are useful tools for attracting eyeballs.

What happens when we get pervasive social networks with learned biases against, say, feminism or Islam or melanin? Or deep learning systems trained on data sets contaminated by racist dipshits? Deep learning systems like the ones inside Facebook that determine which stories to show you to get you to pay as much attention as possible to the adverts?

I think you already know the answer to that.

Look to the future (it's bleak!)

Now, if this is sounding a bit bleak and unpleasant, you'd be right. I write sci-fi, you read or watch or play sci-fi; we're acculturated to think of science and technology as good things that make our lives better.

But plenty of technologies have, historically, been heavily regulated or even criminalized for good reason, and once you get past the reflexive indignation at any criticism of technology and progress, you might agree that it is reasonable to ban individuals from owning nuclear weapons or nerve gas. Less obviously: they may not be weapons, but we've banned chlorofluorocarbon refrigerants because they were building up in the high stratosphere and destroying the ozone layer that protects us from UV-B radiation. And we banned tetraethyl lead additive in gasoline, because it poisoned people and led to a crime wave.

Nerve gas and leaded gasoline were 1930s technologies, promoted by 1930s corporations. Halogenated refrigerants and nuclear weapons are totally 1940s, and intercontinental ballistic missiles date to the 1950s. I submit that the 21st century is throwing up dangerous new technologies—just as our existing strategies for regulating very slow AIs have broken down.

Let me give you four examples—of new types of AI applications—that are going to warp our societies even worse than the old slow AIs of yore have done. This isn't an exhaustive list: these are just examples. We need to work out a general strategy for getting on top of this sort of AI before it gets on top of us.

(Note that I do not have a solution to the regulatory problems I highlighted earlier, in the context of AI. This essay is polemical, intended to highlight the existence of a problem and spark a discussion, rather than to offer a canned solution. After all, if the problem was easy to solve it wouldn't be a problem, would it?)

Firstly, Political hacking tools: social graph-directed propaganda

Topping my list of dangerous technologies that need to be regulated, this is low-hanging fruit after the electoral surprises of 2016. Cambridge Analytica pioneered the use of deep learning by scanning the Facebook and Twitter social graphs to identify voters' political affiliations. They identified individuals vulnerable to persuasion who lived in electorally sensitive districts, and canvassed them with propaganda that targeted their personal hot-button issues. The tools developed by web advertisers to sell products have now been weaponized for political purposes, and the amount of personal information about our affiliations that we expose on social media makes us vulnerable. Aside from the last US presidential election, there's mounting evidence that the British referendum on leaving the EU was subject to foreign cyberwar attack via weaponized social media, as was the most recent French presidential election.

I'm biting my tongue and trying not to take sides here: I have my own political affiliation, after all. But if social media companies don't work out how to identify and flag micro-targeted propaganda then democratic elections will be replaced by victories for whoever can buy the most trolls. And this won't simply be billionaires like the Koch brothers and Robert Mercer in the United States throwing elections to whoever will hand them the biggest tax cuts. Russian military cyberwar doctrine calls for the use of social media to confuse and disable perceived enemies, in addition to the increasingly familiar use of zero-day exploits for espionage via spear phishing and distributed denial of service attacks on infrastructure (which are practiced by western agencies as well). Sooner or later, the use of propaganda bot armies in cyberwar will go global, and at that point, our social discourse will be irreparably poisoned.

(By the way, I really hate the cyber- prefix; it usually indicates that the user has no idea what they're talking about. Unfortunately the term 'cyberwar' seems to have stuck. But I digress.)

Secondly, an adjunct to deep learning targeted propaganda is the use of neural network generated false video media.

We're used to Photoshopped images these days, but faking video and audio is still labour-intensive, right? Unfortunately, that's a nope: we're seeing first generation AI-assisted video porn, in which the faces of film stars are mapped onto those of other people in a video clip using software rather than a laborious human process. (Yes, of course porn is the first application: Rule 34 of the Internet applies.) Meanwhile, we have WaveNet, a system for generating realistic-sounding speech in the voice of a human speaker the neural network has been trained to mimic. This stuff is still geek-intensive and requires relatively expensive GPUs. But in less than a decade it'll be out in the wild, and just about anyone will be able to fake up a realistic-looking video of someone they don't like doing something horrible.

We're already seeing alarm over bizarre YouTube channels that attempt to monetize children's TV brands by scraping the video content off legitimate channels and adding their own advertising and keywords. Many of these channels are shaped by paperclip-maximizer advertising AIs that are simply trying to maximize their search ranking on YouTube. Add neural network driven tools for inserting Character A into Video B to click-maximizing bots and things are going to get very weird (and nasty). And they're only going to get weirder when these tools are deployed for political gain.

We tend to evaluate the inputs from our eyes and ears much less critically than what random strangers on the internet tell us—and we're already too vulnerable to fake news as it is. Soon they'll come for us, armed with believable video evidence. The smart money says that by 2027 you won't be able to believe anything you see in video unless there are cryptographic signatures on it, linking it back to the device that shot the raw feed—and you know how good most people are at using encryption? The dumb money is on total chaos.

Paperclip maximizers that focus on eyeballs are so 20th century. Advertising as an industry can only exist because of a quirk of our nervous system—that we are susceptible to addiction. Be it tobacco, gambling, or heroin, we recognize addictive behaviour when we see it. Or do we? It turns out that the human brain's reward feedback loops are relatively easy to game. Large corporations such as Zynga (Farmville) exist solely because of it; free-to-use social media platforms like Facebook and Twitter are dominant precisely because they are structured to reward frequent interaction and to generate emotional responses (not necessarily positive emotions—anger and hatred are just as good when it comes to directing eyeballs towards advertisers). "Smartphone addiction" is a side-effect of advertising as a revenue model: frequent short bursts of interaction keep us coming back for more.

Thanks to deep learning, neuroscientists have mechanised the process of making apps more addictive. Dopamine Labs is one startup that provides tools to app developers to make any app more addictive, as well as to reduce the desire to continue a behaviour if it's undesirable. It goes a bit beyond automated A/B testing; A/B testing allows developers to plot a binary tree path between options, but true deep learning driven addictiveness maximizers can optimize for multiple attractors simultaneously. Now, Dopamine Labs seem, going by their public face, to have ethical qualms about the misuse of addiction maximizers in software. But neuroscience isn't a secret, and sooner or later some really unscrupulous people will try to see how far they can push it.
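To illustrate the difference, here is a toy contrast between a one-shot A/B test and a bandit-style optimizer that tunes several "attractor" knobs at once. The reward function and knob names are entirely made up and have nothing to do with Dopamine Labs' actual product; this is just the shape of the technique.

    # Toy contrast: a single A/B test versus an epsilon-greedy bandit searching
    # the joint space of several "attractor" settings. The reward function is a
    # made-up stand-in for "minutes of user attention"; nothing here is a real product.
    import random

    def simulated_attention(notification_rate, reward_variability, autoplay):
        # Hypothetical engagement measurement for a cohort of users.
        base = 2.0 + 3.0 * reward_variability + 1.5 * autoplay + notification_rate
        return base + random.gauss(0, 0.5)

    # A/B test: one binary choice, everything else frozen.
    variant_a = simulated_attention(0.5, 0.2, 0)
    variant_b = simulated_attention(0.5, 0.2, 1)
    print("A/B winner:", "B" if variant_b > variant_a else "A")

    # Epsilon-greedy bandit: optimizes all three knobs simultaneously.
    candidates = [(n, v, a) for n in (0.1, 0.5, 1.0)
                            for v in (0.0, 0.5, 1.0)
                            for a in (0, 1)]
    estimates = {c: 0.0 for c in candidates}
    counts = {c: 0 for c in candidates}
    for step in range(2000):
        if random.random() < 0.1:                           # explore
            c = random.choice(candidates)
        else:                                               # exploit current best
            c = max(candidates, key=lambda k: estimates[k])
        reward = simulated_attention(*c)
        counts[c] += 1
        estimates[c] += (reward - estimates[c]) / counts[c]  # running mean
    print("bandit's favourite settings:", max(candidates, key=lambda k: estimates[k]))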

Let me give you a more specific scenario.

Apple have put a lot of effort into making realtime face recognition work with the iPhone X. You can't fool an iPhone X with a photo or even a simple mask: it does depth mapping to ensure your eyes are in the right place (and can tell whether they're open or closed) and recognizes your face from underlying bone structure through makeup and bruises. It's running continuously, checking pretty much as often as every time you'd hit the home button on a more traditional smartphone UI, and it can see where your eyeballs are pointing. The purpose of this is to make it difficult for a phone thief to get anywhere if they steal your device, but it means your phone can monitor your facial expressions and correlate them with app usage. Your phone will be aware of precisely what you like to look at on its screen. With addiction-seeking deep learning and neural-network generated images, it is in principle possible to feed you an endlessly escalating payload of arousal-maximizing inputs. It might be Facebook or Twitter messages optimized to produce outrage, or it could be porn generated by AI to appeal to kinks you aren't even consciously aware of. But either way, the app now owns your central nervous system—and you will be monetized.

Finally, I'd like to raise a really hair-raising spectre that goes well beyond the use of deep learning and targeted propaganda in cyberwar.

Back in 2011, an obscure Russian software house launched an iPhone app for pickup artists called Girls around Me. (Spoiler: Apple pulled it like a hot potato when word got out.) The app worked out where the user was via GPS, then queried FourSquare and Facebook for people matching a simple relational search—for single females (per Facebook) who had checked in (or been checked in by their friends) in your vicinity (via FourSquare). The app then displayed their locations on a map, along with links to their social media profiles.

If they were doing it today the interface would be gamified, showing strike rates and a leaderboard and flagging targets who succumbed to harassment as easy lays. But these days the cool kids and single adults are all using dating apps with a missing vowel in the name: only a creeper would want something like "Girls around Me", right?

Unfortunately there are even nastier uses than scraping social media to find potential victims for serial rapists. Does your social media profile indicate your political or religious affiliation? Nope? Don't worry, Cambridge Analytica can work it out with 99.9% precision just by scanning the tweets and Facebook comments you liked. Add a service that can identify people's affiliations and locations, and you have the beginning of a flash mob app: one that will show you people like Us and people like Them on a hyper-local map.

Imagine you're young, female, and a supermarket has figured out you're pregnant by analysing the pattern of your recent purchases, like Target back in 2012.

Now imagine that all the anti-abortion campaigners in your town have an app called "babies at risk" on their phones. Someone has paid for the analytics feed from the supermarket and the result is that every time you go near a family planning clinic a group of unfriendly anti-abortion protesters engulfs you.

Or imagine you're male and gay, and the "God Hates Fags" crowd has invented a 100% reliable Gaydar app (based on your Grindr profile) and is getting their fellow travellers to queer-bash gay men only when they're alone or outnumbered 10:1. (That's the special horror of precise geolocation.) Or imagine you're in Pakistan and Christian/Muslim tensions are mounting, or you're in rural Alabama, or ... the possibilities are endless.

Someone out there is working on it: a geolocation-aware social media scraping deep learning application that uses a gamified, competitive interface to reward its "players" for joining in acts of mob violence against whoever the app developer hates. Probably it has an innocuous-seeming but highly addictive training mode to get the users accustomed to working in teams and obeying the app's instructions—think Ingress or Pokemon Go. Then, at some pre-planned zero hour, it switches mode and starts rewarding players for violence—players who have been primed to think of their targets as vermin, by a steady drip-feed of micro-targeted dehumanizing propaganda delivered over a period of months.

And the worst bit of this picture?

Is that the app developer isn't a nation-state trying to disrupt its enemies, or an extremist political group trying to murder gays, Jews, or Muslims; it's just a paperclip maximizer doing what it does—and you are the paper.

784 Comments

1:

Not to detract too much from your broader (terrifying) point, but does Cambridge Analytica actually match up to its own hype? I was under the impression that it was actually a ramshackle scam that was largely ignored by the campaigns it claimed it worked for (http://highline.huffingtonpost.com/articles/en/mercers/).

2:

does Cambridge Analytica actually match up to its own hype?

I'm not sure that matters — even if they don't, the real thing will be along soon enough.

It's fairly obvious that a lot of electoral meddling via social media took place in 2016/17, much of it automated and relying on mobilized bot armies; it's a fair bet that DARPA and/or the NSA black budget will now be funneling on the order of billions of dollars into (a) making the tech work reliably, and (b) figuring out how to defend against it. It's an early example of what Vernor Vinge named a "DWIW" weapon in "Rainbows End" ("Do What I Want" AI).

3:

The geolocation stuff is terrifying; however, can't you just turn off the broadcasting of your location from your phone? It feels like a lot of these problems can be - if not solved - at least lessened by more control over what information you 'transmit' out.

As you point out, facebook can fill in the holes in its victim-graph using information from your neighbours. Perhaps we just need to force them to have ridiculously fine-grained permissions on data usage, so that they need permissions from each victim for each use of their (say) date of birth.

If there is one thing that gums up a functioning slow-AI, it's a stultifying byzantine bureaucracy!

4:

Sooner or later, the use of propaganda bot armies in cyberwar will go global, and at that point, our social discourse will be irreparably poisoned.

Nothing really new here, "just" speeded-up & refined. Josef Goebbels or "Saint" Dominic would recognise the methods, immediately!

@2: Won't the shit really hit the fan, though, if/when Russian (or whoever) meddling is proven in either the US &/or Brexit results?

There are mutterings starting here about the latter, the Brits being just a tad more cynical than the US electorate ( I hope )

Someone else, a philosophical economist saw some of this a long time ago, but his name has been much-taken-in-vain by supposed "followers" of his, who all too plainly, haven't understood a single thing he said: Adam Smith

^^^^^^^^^^^^^^^^^

But you & we are right to be scared. How do we & you ensure that this vital message gets spread around, since the more people understand your message, the higher the chance of preventing this nasty future - suggestions?

5:

So, who cribbed from whom? This looks an awful lot like Ted Chiang's: https://www.buzzfeed.com/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway

6:

I ran across Ted's essay while I was halfway through writing this talk; we think on convergent paths.

I've been writing about this since 2010, though.

7:

Good enough :-) I do subscribe to the theory that ideas want to be born, and will find multiple channels to do so!

8:

Went off the deep end a little bit at the end there didn't you Chuck? Was totally on board with you right up until the last sentence pretty much.... Maybe I missed something, but what corporation wants its users/eyeballs/ad-targets to start inflicting acts of violence on other groups?

sure the ad networks provide the tools that your nazi app developers will need, but the app developers still have to be nazis right?

9:

Making it so that you can use the internet from your phone and at the same time remain hidden is not trivial. Even if you do turn everything off so that nothing is explicitly shared, it's still possible to geolocate your phone remotely if you use it for internet access.

This happens because the phone needs to have an IP address to access the internet. These are not random, and are in most cases handed out by somebody else and not hard-coded to the phone. From the IP address (that is the series of four numbers separated by dots) it's not that hard to get at least some location data. Every time you access a web site, the site gets your IP address.

If you're on cellular data, it's probably locatable at least to which country you are in, because the IP address your phone gets is in a block allocated to that country. It might depend on the configurations of the cellular provider what kind of IP address they show "outside" of their systems, but I think it's still more or less bound to a country.

If you're on some wifi network, the IP address you get from there can usually be located quite easily, if wanted. The IP address might be NATted (so that the real internet "sees" only a single address for the whole wifi), but then its location can be deduced from other data, especially if it's a public one - then the other mobile users could provide their GPS data to the server and that could be connected to your device if it's on the same wifi.

Of course one can try to go around this IP address system by using Tor, but it's not that easy on a phone, and would probably have an effect on the battery life. This is because it uses encryption and is somewhat resource intensive.
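For what it's worth, here's roughly how trivial the coarse IP-to-location lookup is. This is only a sketch: it assumes the third-party geoip2 Python library plus a locally downloaded GeoLite2-City database, and the file path and sample address are placeholders.

    # Rough sketch of coarse IP-to-location lookup, assuming the geoip2 library
    # and a downloaded GeoLite2-City database file. Path and address are placeholders.
    import geoip2.database
    from geoip2.errors import AddressNotFoundError

    def rough_location(ip_address, db_path="GeoLite2-City.mmdb"):
        with geoip2.database.Reader(db_path) as reader:
            try:
                r = reader.city(ip_address)
            except AddressNotFoundError:
                return None
            return {
                "country": r.country.iso_code,
                "city": r.city.name,                      # often None for carrier NAT addresses
                "lat_lon": (r.location.latitude, r.location.longitude),
                "radius_km": r.location.accuracy_radius,  # how fuzzy the estimate is
            }

    print(rough_location("203.0.113.5"))   # substitute a real public address to test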

10:

"what corporation wants its users/eyeballs/ad-targets to start inflicting acts of violence on other groups" Arms manufacturers, prison owners, spyware manufacturers, and that's just the obvious candidates who directly profit from violence.

Once someone develops a perfect gaydar app (for example, and assuming that such a thing is possible), even if they only ever intended it for peaceful purposes, it will end up being used for evil (possibly someone else will clone the idea etc.), that's just what people are like.

11:

sure the ad networks provide the tools that your nazi app developers will need, but the app developers still have to be nazis right?

Corporations as they currently exist ain't interested in massacring their customers. But we have any number of corporations who currently sell tools that are used to murder foreigners in bulk.

(Also, we seem to be heading towards virtual corporations — I can't begin to describe how bad I think this idea is — qua AIs, and who the fuck knows what their motives will be? Or what the scale factor for a minimum viable product will be in this problem domain? It's one thing if it takes 10,000 workers to release a genocide app, and another thing entirely if three hackers can barf one up in a weekend-of-code session. It's like the common denominator of a Eurofighter Typhoon II and a quadrotor drone carrying half a kilo of semtex knocked up in a back room by ISIS engineers; one of them is vastly more powerful and sophisticated and expensive, but if you're an unprotected human body, in range, and the target, either one can kill you just as dead.)

12:

IIRC, Egypt, the UAE and Saudi police have already been using Grindr to sting gay men in their jurisdictions — cops fake up a profile and arrest anyone who turns up for a date.

Note that in at least one of those states being gay is a capital offense. (Not sure about Egypt and UAE, but it's still a serious felony there.)

13:

I'd like to direct your attention to a very scary detail in the US:

Right now there are strong forces hell-bent on getting corporations the constitutional right to refuse to provide certain kinds of health-care to their employees, because it would violate the corporation's (owners') religious beliefs.

I put "owners" in paranthesis, because it is painfully obvious that the very moment they crack that bit open, they will argue that you cant distinguish between a company owned by one religious family, and one owned by 100 religious shareholders.

If corporations win this right, then we're headed straight into "First they came for the gays, [...] colored, [...] irish, ..." territory.

The case currently awaiting smoke from SCOTUS, about a bakery and gay marriages, could settle this either way, either going "sure, a bakery can discriminate if they care to wave their cross" or "if you engage in commerce, you cannot discriminate, no matter what you believe" or somewhere in the middle.

Once somebody gains some human rights, they usually get the rest too, and corporations in the USA have already gained the right to influence politics, and religion seems next.

At the end of that slippery slope lies armed corporations who can kill in self-defence.

14:

I thought governments came before other corporations.

"What is the heart but a spring?"---Thomas Hobbes in "Leviathan"

The problem with comparing corporations to AIs is that AIs are beings but not human whereas corporations are human (at least on this planet) but not beings.

15:

I am wondering how the slow AI Corporations are subject to evolution:

They seem to have a fairly short lifetime, and can certainly evolve their processes and methods and copy successful methods from other corporations, and in general their ability to innovate and hence survive seems to be inversely correlated with their age. Presumably the increasingly faster rate of change in markets leads to a faster corporate mutation rate and a shorter individual corporate lifespan and increasing rates of growth for the new disruptors.

Does this lead to a small number of short-lived corporate giants or an explosion of small short-lived corporates?

16:

sure the ad networks provide the tools that your nazi app developers will need, but the app developers still have to be nazis right?

Good thing there's no signs of anti-PC intolerance and that tech isn't unrepresentative of the general population, then(!) ;o)

17:

There's a whole terrifying pile of ultrasonic and audio comms software being deployed lately for everything from autoconfiguration (Chromecast) to fingerprinting to media identification. As basically no platform has mute-by-default for all apps and web content it won't matter if you have location permissions blocked on your phone. The nazi flashmob will be able to hear your web ads singing if they get close.

18:

I am wondering how the slow AI Corporations are subject to evolution

Like all questions in Biology, the answer is "it depends..."

Corporations have some characteristics of biological organisms, but not all of them:

  • They divide, but not endlessly (unlike bacteria)
  • They swap information (like bacteria) but can also patent it
  • They grow but can also shrink
  • They die, but can also be 'resurrected' (bankruptcy?)
  • They have analogous parts to cells (employees) but these are both totipotent (in theory) and can swap between corporates

When it comes to ecosystems of corporations, the business sector plays a role, just as the environment does for a biological organism (reef, forest, undersea volcano...). Obviously, things change much faster in I.T. than in Japanese hospitality (http://www.slate.com/articles/business/continuously_operating/2014/10/world_s_oldest_companies_why_are_so_many_of_them_in_japan.html) and some sectors allow for much larger companies than others.

You can make all sorts of fanciful analogies - the dotcom bubble was like the Cambrian explosion!! - but I think the only way to get any useful answers would be some kind of simulation of corporate ecosystems. Could make an interesting game.
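For what it's worth, a toy version of such a simulation fits in a few lines. Every rule below (growth volatility, failure threshold, merger condition, entry rate) is an arbitrary assumption, meant only to show the shape such a model or game might take.

    # Toy corporate-ecosystem simulation; all parameters are arbitrary assumptions.
    import random

    def simulate(years=200, entrants_per_year=3):
        firms = [{"age": 0, "capital": 1.0} for _ in range(20)]
        for _ in range(years):
            for f in firms:
                f["age"] += 1
                volatility = 0.4 if f["age"] < 10 else 0.1   # young firms are more volatile
                f["capital"] *= random.lognormvariate(0.0, volatility)
            firms = [f for f in firms if f["capital"] > 0.1]  # failures exit the market
            firms.sort(key=lambda f: f["capital"], reverse=True)
            # Crude cannibalism rule: the biggest firm absorbs the smallest.
            if len(firms) > 2 and firms[0]["capital"] > 10 * firms[-1]["capital"]:
                firms[0]["capital"] += firms.pop()["capital"]
            firms += [{"age": 0, "capital": 1.0} for _ in range(entrants_per_year)]
        return firms

    survivors = simulate()
    print(len(survivors), "firms survive; oldest is", max(f["age"] for f in survivors), "years old")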

19:

Religious leaders - always - never forget them ....

20:

Appears UK & US law are heading in different directions here - I'm very glad to say.

21:

OK I'm going to cheat & quote a C_Stross tweet ( And hope he doesn't mind )

The Lipson-Shiu Corporate Type Test: http://www.andrewlipson.com/lstest.html

(My personality type is SCIE. How about you?) I was horrified to come out as ICIE - I think I'll take it again & see what I get... [ pause ] Almost as bad: ILUE

22:

What you're missing is that these corporations have gobs of data for sale. If a big church decides it is their holy mission to provide the names of Gay people to their "conversion squads" that church can just buy a bunch of data, through cut-outs if necessary, and search for:

1.) Men who live in San Francisco/West Hollywood
2.) Have never been married
3.) Are over 35
4.) Post on Grindr

It's not the church's fault if some of the people on the "conversion squads" take their work a little too seriously, or sometimes share data with groups which are a little more hardcore. "We do post guidelines when we put the data online, but some people just don't follow the rules."

(At this point I'm also a little surprised that some big church hasn't decided that surveilling their followers is a wonderful way to ensure a flow of big donations.)

23:

"Maybe I missed something, but what corporation wants its users/eyeballs/ad-targets to start inflicting acts of violence on other groups?"

There is a huge market for inciting violence -- the Daily Mail, Fox TV, Alex Jones, and so on all, to a greater or lesser extent, make their money from it. Every newspaper that has ever called for a war in its headlines (which is most of them) is a "corporation [which] wants its users/eyeballs/ad-targets to start inflicting acts of violence on other groups".

24:

I thought governments came before other corporations.

Governments — those that predate corporations (and, loosely, the Thirty Years' War and the Treaty of Westphalia that established the requirements of the modern nation-state) — don't structurally resemble corporations: they were almost invariably some species of despotic monarchy. (Which might be seen as a family-owned business, but that's stretching the metaphor to breaking point.)

25:

Not all governments were despotic monarchies, even in the ancient world. The Roman Republic is a clear counterexample, as the name suggests (res publica, the thing that belongs to the people), and a strikingly successful one for a long time. Athens from Solon on was a less stable and successful example, but still one that had a substantial impact for a while. Of course they also both made a point of contrasting themselves with "Oriental despotism," exaggerating the differences for propaganda, but there were real differences.

But I think there's an older example still of AI, and one that in fact you point to in your own address: Gods and religions. The god is the human-recognizable symbol for an entity that outlives individual human beings, that infects human brains and uses them to propagate itself (remember that the origin of the word "propaganda" was the Latin phrase meaning "propagation of the faith"), and that has powers that individual human beings lack. I think it makes sense to view it as an ideational parasite. And such entities have characteristic features that you yourself point to in your discussion of transhumanism. Even ideational systems that start out by immunizing people against gods are at risk for evolving into religions over time; look at what happened with Buddhism, which started out virtually as an anti-religion.

26:

"Nobody in 2007 was expecting a Nazi revival in 2017, right?"

You mean you didn't? Seriously. The signs were all there by 1997, let alone 2007 :-( The two reasons that most people don't expect easily predictable effects are that (a) they look at the trends and not the underlying facts and (b) they choose to believe the outcome that most matches their worldview.

And w.r.t. your last paragraph, I have been saying for decades "You aren't the customer - you are the commodity." The customers are the organisations they sell their services (and your data) to.

Sorry, but I am afraid you are being too optimistic!

27:

Great essay!

I've wondered whether some 'AI progress' might be slowed down by requiring such AIs to juggle multiple demands/goals. Plus, since multi-tasking is supposed to be such a good thing for human employees to do, we may as well get the AIs in on the fun.

What happens when AIs compete against AIs? We should be seeing this already. Such competition/evolution could be hastened by advertisers tightening ad budgets.

Gov't run AI - apart from non-democratic countries, no mention. Curious. Maybe the military will work on this because ever since the science budgets got slashed, they're the only ones with discretionary research spending. Motivation/rationale: the way things are looking climate-wise, the military could use help in optimizing resources for disaster-relief operations. Seriously - there's lots of real-world potential here for beneficial AI including getting rid of a few layers of elected 'Gov't' reps.

Reaction - After CorpA gets 100% of all eyeballs, what do you do if you're CorpB? I'm guessing CorpB won't just fold - so what's the likeliest retaliatory strategy? All it takes is one Hindenburg for an entire new industry to fold.

28:

Just a note:

Scientific names consist of a genus, which is capitalized, followed by a specific epithet, which is lower case, and an authority, so that you know whose species concept is referred to. The genus and specific epithet are typically written in italics or underlined (the latter mostly on hand-written specimen labels, because most people don't have good italic handwriting).

So "Homo sapiens sapiens" is properly Homo sapiens ssp. sapiens L. or informally, Homo sapiens sapiens or even Homo s. sapiens. The last is acceptable because the we're the nominotypical subspecies, unlike, say, Neanderthals. The L. is in honor of Linnaeus, who first proposed the species concept we now use for ourselves. Early biologists have their names abbreviated, and no accepted scientific name is older than Linnaeus' publications.

Why care? Well, look at the role of evolution in the arguments of the religious right. They ignore the fact that the first page of the Bible says that the Moon only comes up during the night (something that, if they glanced skyward during about half the month, they'd know was false), accept astronomy, but denigrate evolution because the Bible says that Man was created separately from animals and they can't stand the theory that all eukaryotic species descend from a single ancestor.

A lot of people go along with this scorn of biology without thinking about it, despite the fact that the theory of evolution (now in version 3, Evo Devo) is far more successful than, say, general relativity and quantum mechanics (where we're still arguing whether time exists and whether 96% of the observable universe exists, and if so, what it is). Prejudice is not inversely linked to success, sadly.

Basically, if you want to show support for good science, italicize scientific names and only capitalize the genus. I know that's a bit more typing, but it's an easy way to show you understand and you care.

29:

Tay wasn't Microsoft's first conversational chatbot; Xiaoice was - she's huge on Weibo and WeChat, she got a job presenting the weather on Chinese TV in 2015, and she had a book of poetry come out this year. The difference isn't that Xiaoice was any better at filtering (she got those protections about 24 hours after Tay got corrupted); it's just that online culture in China didn't include trolling her the way western online culture optimised for trolling Tay. So I'm wondering what the implications of that are for 'slow AI' in China, where you have that interesting mercantilist government participation in business alongside emerging startup culture with brands like Baidu?

30:

Oh, come off it! The fact that the theory of evolution is so solid does NOT mean that the exact definition of species is, let alone the detailed taxonomy. Mammals are relatively simple, but the Linnaean model does NOT match reality even for them/us (because evolution and inter-fertility are continuous and probabilistic, not discrete and deterministic), so there is inevitably a lot of personal judgement and disagreement. Aside: that's also true of the theory of general relativity versus the exact formula (despite the claims of the black hole divers).

And demanding a particular font in a blog such as this is just plain ridiculous. There are a zillion such conventions in mathematics and other sciences, and it is insane to expect them to be used.

Yes, one can reasonably say that we, Neanderthals and Denisovans are all the same species and Homo sapiens is c. 1,000,000 years old - but it is equally reasonable to say that we are different ones, and HS is only c. 300,000 years old. And variations on those ....

31:

The only quibble I have is that we're talking about AI 3.0 at least, possibly 4.0.

Basically, humans have been joining into groups and using a variety of artificial techniques to augment memories for tens of thousands of years (this is the whole Lynne Kelly thing I went off on a few months ago). This wasn't just ancient superstitious polytheism. For one thing, it's not clear if the old "spirits" were at all the same thing as the Judeo-Christian God. In any case, a tribe with many spirits also had a normal mechanism against subversion: nobody normally had a monopoly on all the knowledge the group collectively needed to survive, so by controlling access to knowledge, there was a check on someone taking authoritarian power.

This was (is) AI 1.0. Only remnants of the ancient, pre-literate originals survive, on reservations in Australia, the Americas, and parts of Russia.

AI 1.0 systems survived the transition to literacy, and they reached their heights with the Roman and other classical empires. Much of their architecture was designed to transmit, preserve, and enforce the memes that bound them together. You can see remnants of this system in North Korea, for example.

Normally, when two AIs of 1.0 fought, the stronger subsumed the weaker, either by destroying its "gods" (which really means attacking all the technology designed to pass on information between generations), or subsuming those gods into a bigger system (as in Rome and China).

AI 2.0 might have started with the Jews refusing to be assimilated, changing their concept of god to God and religion, and continuing on. This fed into the whole "Religions of the book" thing. Are these AI 2.0? Hard to tell, because there's sort of an evolution rather than a revolution.

AI 2.0 definitely showed up when the printing press started making it much easier to store and transmit huge amounts of information. This was about the time that corporations showed up, incidentally.

What we're seeing now is that humans are no longer the only information processing system, and so existing AIs (corporations, nation-states, etc.) are increasingly becoming symbiotic structures composed of multiple systems for processing information and integrating human behavior in groups. We'll see how well it all works out.

32:

Great talk. I just have a few quibbles:

  • In the US, the average age of cars was 11.6 years in 2015. I remember reading that it jumped to 13 years in 2017, but I can't find it. Perhaps I'm remembering something incorrectly?
  • http://beta.latimes.com/business/autos/la-fi-hy-ihs-average-car-age-20150729-story.html

  • Your app scenario is complicated by the fact that app stores are Android and Apple walled gardens. That's the reason that the app you mentioned for finding women near you has not yet been resurrected.
  • https://techcrunch.com/2017/12/08/apples-widened-ban-on-templated-apps-is-wiping-small-businesses-from-the-app-store/

  • In general, I think apps have followed music, writing, and news sites/blogs in the following structure:

    a. About 1 percent will make enough money for the developer to live on.

    https://www.theinquirer.net/inquirer/news/2322853/over-99-percent-of-apps-will-not-make-any-money

    b. An extra 4.5 percent will make some pocket money.

    c. A portion of the free apps downloaded have a different utility which doesn't require them to make money per se. I mean, your bank doesn't make money off of you using its app, does it?

    d. Most apps make money off of ads. Ask news sites how well that is going.

    e. A huge fraction of apps are template apps, the equivalent of poorly-made YouTube videos or fanfiction.

  • The Nazi apps you mentioned would really be built as template apps (I don't think that outright Nazis have yet demonstrated enough technical talent to build an app from scratch). That leads to an arms race between template apps and any AI which removes them. This is similar to the fight social media is experiencing now, except that right now the tech giants have the upper hand in policing their app stores.
33:

    It's possible to geolocate your phone if you have cellphone access. One way of doing this works by triangulation between accessible cell phone tower locations. Internet access is not required.

    I'm not certain the same approach works with WiFi access, but I don't see why it wouldn't, though it might well need a separate implementation.
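For illustration, here is a minimal Python sketch of the trilateration idea behind tower-based geolocation. It is not any carrier's or handset's actual implementation: it simply assumes you have three towers with known positions and rough distance estimates to each (e.g. from signal timing), and solves for a position by least squares. The tower coordinates and distances are invented.

import numpy as np

def trilaterate(towers, distances):
    # Least-squares position estimate from known tower (x, y) positions and
    # estimated distances to each, using the usual linearisation of the
    # circle equations against the last tower as a reference point.
    towers = np.asarray(towers, dtype=float)
    d = np.asarray(distances, dtype=float)
    ref, d_ref = towers[-1], d[-1]
    A = 2.0 * (towers[:-1] - ref)
    b = (d_ref ** 2 - d[:-1] ** 2
         + np.sum(towers[:-1] ** 2, axis=1) - np.sum(ref ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Invented example: three towers roughly 1 km apart, noisy distance estimates.
towers = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]
distances = [710.0, 705.0, 712.0]
print(trilaterate(towers, distances))  # roughly [500, 500]

With noisy real-world distance estimates the answer is only approximate, which is why operators combine this with other signals, but it shows why no internet connection or GPS is needed for a rough fix.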

    34:

    Additionally, the requirement of "perfection" isn't reasonable. Things like that work more on "believable"...and then there's the question of "Believable by whom?". Lots of people will accept quite weak evidence if it coincides with their current beliefs. Otherwise most news, fake and otherwise, would be out of business.

    35:

    In the bioscience journals that I've copy edited, the style is that you spell out the genus the first time, but abbreviate it thereafter. But I've never seen the epithet abbreviated, in any article I've edited or any reference I've looked up. I'm not saying it never happens, but it doesn't seem to be common.

    36:

    With respect to "at the end of that road" may I direct your attention to the song "Joe Hill" and some of the arguments about the origin of the term "copper".

    That isn't the only, or even the most egregious, example of corporations killing people without repercussions. And they aren't all in the US. And they aren't all in the "distant" past.

    Now if you're arguing that they don't currently have the right to kill in defense of themselves, I would ask you why armed security guards exist.

    37:
    The geolocation stuff is terrifying, however can't you just turn off the broadcasting of your location from your phone? It feels like a lot of these problems can be - if not solved - at least lessened by more control over what information you 'transmit' out.

    Does not work. Modern smartphone software already does it. Your application will request access to your phonebook and GPS signal... and will refuse to work unless you let it. And the majority of people who don't already reflexively click "yes, I'm OK with it" on every popup will crack and allow it all if the application's payoff seems juicy enough.

    I mean, imagine that your Facebook app REQUIRES geolocation, or does not work. But you can't say you won't use Facebook: 90% of your child's activities become completely opaque if you don't monitor Facebook. And woe if you order them not to use Facebook; after 3 days of peer pressure at school, they will have a fully private and hidden profile. So, you WILL allow Facebook to track you in real time...

    38:

    The average age of cars in the U.S. will likely increase as the economy bleeds out. The apps could show up on a forked OS and app store, call it a "Chosen phone". True believers will be strongly encouraged to not be seen with ungodly phones.

    39:

    Yeah, the "Roman Republic" was an oligarchy. The "res publica" wasn't everyone, it wasn't even every non-slave. Rome was owned by a few families, and everyone else lived at their sufferance...not legally, but in practice.

    A better example would be Athens or Sparta, neither of which could be considered despotisms. Sparta was ruled by laws that were essentially unchanging and everyone was required to know perfectly. They were sung at feasts. To avoid despotism, they had two kings. Athens, after a lot of hoo-rah, was ruled by the wealthy male citizens...but it wasn't an oligarchy, as the newly rich were allowed to join. (They were, of course, looked down on, but they could even rise to the top. Check out Themistocles. Of course, their enemies were likely to pull them down.)

    But the thing is, neither of those was expansionist in the manner of a corporation. Even Alexander, while he tried to conquer the world, didn't try to rule it in a unified way. Rome came closer to that model, but didn't want to dilute the control of the oligarchs...so they went to a Federati system, the (loose) basis of the US Federal government.

    40:

    It doesn't appear to be reasonable to claim that Neanderthals, etc. and Homo sapiens were the same species, even though they weren't totally reproductively isolated. The evidence seems to show that interbreeding was usually not successful. In particular, the mitochondrial evidence seems to show that the mothers of current humans were Homo sapiens sapiens. One possible reason for this is that the shape of the baby's head may have been fatal to the mothers of other crossbreeds. Of course, given the rarity of successful crossbreeding, it could just be that the all-female-ancestors line died out. IOW, the data could be pure chance.

    But every genetic study seems to show that successful crossbreeding was rare. And the groups were not geographically isolated, so it's reasonable to deny that they were the same species.

    41:

    That sort of thing was a large part of my point. Now compare genus Felis, where F. catus and F. sylvestris interbreed readily, or genus Cervus, where C. elaphus and C. nippon do. The choice of what constitutes a species is very political. Look on the bright side - it's worse in botany :-(

    42:

    Administrative note: The discussion of evolutionary biology is derailing and should be dropped until after comment #200.

    Further comments on this topic will be deleted.

    43:

    Yeah, you are completely right: the cellular network knows the location of each mobile phone. I didn't take that into account, because firstly it's kind of hard to get if you're not the operator, and secondly at least in many places that information is strictly controlled. It's not available to a random website which the phone is accessing.

    Also yes, the applications often want more comprehensive access to the phone than strictly necessary for the application to work. There are many reasons for this, and it's not always because the application maker is somehow shady and wants to do something you wouldn't want. Sometimes the interface for handling the rights of the program is just annoying enough that the programmer just requests everything and forgets about it.

    44:

    If you could fork an app store that easily, why hasn't it happened already?

    45:

    To our OGH: Sorry for the derail, I've been reading too many official documents where scientific names are misspelled and snotty bureaucrats insist their brand of arrogant ignorance is correct, because the real science doesn't matter. I'll try to bring this back to our future being broken, but I think the reaction to asking people to spell scientific names right is good evidence that it is.

    People of European, Asian, Native American, Oceanic, or Australian ancestry have something like 4% Neanderthal and/or Denisovan DNA. That's good evidence that there was successful interbreeding, especially since it's not the same 4%. IIRC, there's evidence of a higher percent of putative Denisovan DNA in ethnic Tibetans and Melanesians. The former is hypothesized to be part of Himalayan adaptations to living at high elevation, while the latter is just one of those things that might or might not make sense. I also believe that even African populations have evidence of multiple genomes contributing to the African line of H. s. sapiens, and I'm definitely aware of arguments that the definitions of fossil hominids might be overly split, that the combination of genetic and morphological evidence suggests that we were even more morphologically diverse in the past than we are now, but that this did not prevent interbreeding. So no, Neanderthals were not another species, and we can argue endlessly about whether they were subspecies. Subspecies are not reproductively isolated by definition*, and that certainly describes what genome-based evidence shows.

    I'm not clear on the extent to which Neanderthal and Denisovan DNA is functional or not in modern humans. This is a question about whether or not it codes for traits that are advantageous in specific environments, such as cold, high elevation, or increases diversity in the immune system in ways that favor defeat of pathogens.

    *In many cases, many species are not reproductively isolated. There's well over a dozen accepted definitions for species. For example, most definitions of prokaryotic species do not depend on reproductive isolation, because prokaryotes generally don't work that way.

    46:

    AFAIK triangulation via cell towers is only available to operators, not apps. The ID of the cell tower you are using is available via the SS7 system if you have the phone number or IMSI.

    Google uses a database of WLAN network IDs in addition to, or instead of, GPS - it's apparently more accurate. Most (v4) IP addresses make it possible to identify the country or even the city from which you access the internet.
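As a concrete (and hedged) illustration of how coarse IP geolocation works in practice, here is a minimal Python sketch using MaxMind's freely downloadable GeoLite2-City database via the geoip2 library. The database path and the IP address are placeholders; addresses not present in the database raise geoip2.errors.AddressNotFoundError.

import geoip2.database

reader = geoip2.database.Reader("GeoLite2-City.mmdb")  # placeholder path to the downloaded database
try:
    response = reader.city("203.0.113.7")  # placeholder address (TEST-NET-3); use the address of interest
    print(response.country.name, response.city.name,
          response.location.latitude, response.location.longitude)
finally:
    reader.close()

City-level accuracy is about the best this gives you; as the comment says, it's the WLAN registries and ISP data that get you down to the block.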

    47:

    You can simply not have a phone.

    (And you sure as shit don't give your kids a phone, at a minimum for the same reason you don't give them your cheque book (quite apart from any other reasons), only it applies much more strongly in the phone case: misuse of cheques is at least a deliberate and explicit act, whereas with a phone they can spend your money on shit that doesn't even exist as part of some dumb game and not even realise that real money is involved at all, until the bill arrives and you yell at them (by which time the actual transgression is too far in the past for the conditioning to take properly).)

    Cue the inevitable "But you can't do that because..." - Stop right there. I don't care what comes after "because", I deny it. Not so long ago the things didn't even exist to be had - not so long ago everybody did "do that" because "doing that" was the only possibility. That other possibilities now do exist does not impose a similar lack of choice on deciding to reject them. Moreover, pretty well every point used to attempt to argue otherwise boils down to "but all these other people do it", which a lifetime of experience tells me is more often a reason not to do something than otherwise, and the more people obsess over it and the faster they rush into it the more reason there is not to do it. You just have to learn to say "no".

    ("Ignoring peer pressure" really needs to become a matter of basic education right from the very start of school.)

    This is what gets me about pieces like this: they always get to a certain point and then skate off sideways and fail to address the fundamental aspect, that the undesirable consequences are only even possible in the first place because people have suddenly developed this mass obsession - in large part because they are completely ignorant of the very problems in question; the obvious response is to educate them against the habit, and the material presented is excellent for the purpose, but such an aim is not only never attempted, but never even hinted at as being a potential useful response by others.

    Heroin is more addictive than mobile phones, more useful, and less damaging both to individuals and to society (practically all the "problems" associated with heroin are created ex nihilo, or else massively exacerbated, by its illegality, and are not an inherent property of heroin itself). Enforcement does not reduce heroin use, but education does, even when the premises it's based on are so flaky.

    The mobile phone epidemic needs to be addressed with fervour comparable to the efforts directed against heroin, but with an approach modified to suit the different characteristics: education can be backed by a large corpus of established fact (as opposed to irrational moralistic prejudice against people choosing specific methods of making themselves happy), and enforcement is sufficiently straightforward as to be trivial (by simply not granting transmission licences, hiding a national infrastructure of radio transmitters being ridiculously impossible), requiring minimal resources (they can go into education instead) and avoiding the mass infliction of personal misery inherent in anti-heroin enforcement.

    Only this won't happen, for a reason even more fundamental to the same problem: capitalism.

    48:

    Charlie: the brain-in-a-box link is borked.

    I would disagree with the assumption of the primacy of the "overt purpose" of corporations over the purpose of making money. I would put it the other way round: making money is not merely a "life-support" function as vital as breathing, it is the prime purpose. The "overt purpose" is basically an excuse to implement a money-making organisation. The overt purpose is not the paperclip; the money is the paperclip. Making money is why corporations get set up in the first place. It's also why the things they produce are crap and longevity of any product is so hard an attribute to find: the overt purpose is compromised in the interests of making more money by selling more things, and strategies like concentrating a product's crapness into a tendency to only last six months, or an absence of functionality whose importance may not be immediately apparent but which can be "remedied" by buying another related product, are effective ways of getting away with it.

    49:

    The phone itself can triangulate off towers; the necessary information exists in the front end because of the phone's own involvement with switching towers, and it can be disambiguated using things like GPS hints and the same kind of technique as Google's WLAN trick (using an externally-supplied database).

    How straightforward it is to write code to perform this function on current implementations of mobile technology I don't know, but the nature of the technology itself does ensure that it is possible.

    50:

    I suspect switching towers is handled by the phone's baseband processor, not the application processor. That information should be at least as well protected from apps as the GPS location data. If we talk about hacking baseband processors, we open a whole new can of worms...

    51:

    "I don't think that outright Nazis have yet to demonstrate enough technical talent to build an app from scratch"

    It is certainly comforting to believe that your scary enemy is too thick to be as scary as they might like to be, but it is also misleading, since moral and technical intelligence are not rigidly correlated. No matter how vile the ideology it will still include its own complement of disturbingly talented people.

    52:

    The lead-damaged generation in the US is 45-65 years old right now.

    This is also the age range of maximum political power, similarly the upper management of most organizations is in that range.

    The symptoms are thought to be: aggression, impulsiveness, lower IQ.

    I'm hoping things get a bit better over the next decade.

    53:

    Couple of typos, if OGH is interested:
    - World Health Organization should be capitalised.
    - "usless".
    - either "because" or "when", not both. I think.

    54:

    Many thanks for the essay. I'm impressed, and frightened, and convinced.

    55:

    Jaron Lanier discussed some of the problems created by AI when he was on Tavis Smiley. He pointed out that the Nazis in Charlottesville were an offshoot of Gamergate and Black Lives Matter, and that a year from now there will be a backlash to "Hashtag Me Too."

    • Basically, the AIs used to monetize the sites facilitate the creation of new negative groups in backlash to the good movements.

    His discussion starts in part two. There are transcripts of the shows at each link. Harvest them before they disappear. I've pulled some quotes, just in case.

    Computer Scientist and Author Jaron Lanier, Part 2 http://www.pbs.org/wnet/tavissmiley/interviews/computer-scientist-author-jaron-lanier-part-2/

    [quote] Lanier: Yeah. So I was describing this process whereby people do something very positive, very pure-hearted in the current online world. Black Lives Matter is an example we were using, but I could also talk about the Arab Spring, I could talk about a lot of other examples. And I have a feeling this is gonna happen with me too that’s going on right now. So…

    Tavis: The hashtag Me Too about the women, yeah.

    Lanier: Yeah.

    Tavis: Sure, sure, okay.

    Lanier: So what happens is all these people get together. What they do is beautiful. They create literature, they create beautiful communities. It’s moving. It makes you cry. It’s incredible. It opens eyes. It opens hearts, right?

    But the thing is, behind the scenes, there’s this completely other thing going on, which is this data that’s coming in from all these people is the fuel for the engine that runs what’s called the advertising business, but I prefer to call it the behavior modification business.

    So it has to be turned into something that will generate engagement, not just for those people, but for everybody. Because you want to maximize the use of your fuel. You want it to be as powerful and as efficient as possible.

    And, unfortunately, if you want to maximize engagement, the negative emotions are more efficient uses of that fuel than the positive ones. So fear, anger, annoyance, all of these things, irritation, these things are much easier to generate engagement with.

    So all that good energy from Black Lives Matter or other movements is repackaged and rerouted not by an evil genius, but just kind of automatically by algorithms to maximally coalesce a counter group that will find each other that might not have found each other otherwise that will be irritated and agitated by it.

    And because the negative emotions are more powerful for this kind of scheme, that counterreaction will typically be more powerful than the initial good movement.

    And that’s why you have this extraordinary phenomenon of Black Lives Matter and then, the next year, you have this rise of white supremacists and neo-Nazis and this horrible thing which we really hadn’t expected. Nobody had seen that. It’s like this algorithmic process that I think is kind of reliable and we must shut that down.

    . . .

    Lanier: No, no. Look, this gets back to something I said in our previous encounter, which is there was this beautiful project from the left to make everything free, but at the same time, to want commerce because we love our commerce here. It’s like our Steve Jobs, right? So we said to make it all free, free email, free everything, but you still have to make money.

    So the only option is advertising, but in this very high-tech situation where we have this constant measurement and feedback loop with this device that we have with us all the time. It’s no longer advertising. It turns into behavior modification. So essentially, I think this was not an evil scheme.

    Probably the people in Silicon Valley would have been perfectly happy to come up with something like Facebook that was a subscription model where you could also earn a royalty for being successful as a poster on Facebook or something. And I think that alternate universe would have had its own problems, but it wouldn’t have had this problem.

    This idea that the only business model available is behavior modification for pay by mysterious third parties, so you don’t even know who’s hypnotizing you, that didn’t have to happen, and that is the problem. And that was actually a mistake made by the left and was kind of imposed on the businesses. I was there. I think that that’s actually an accurate description. [/quote]

    Virtual Reality Pioneer Jaron Lanier, Part 1 http://www.pbs.org/wnet/tavissmiley/interviews/computer-scientist-author-jaron-lanier-part-1/

    The irony is that Tavis was pulled off the air weeks later for alleged sexual misconduct.

    56:

    "You can't do this because If you do, Child Protective Services will take your children away and remand them to the foster care system."

    Sound far-fetched? All it takes is a perception that a young person not having a phone is unreachable (or untrackable) and therefore at risk of a terrible fate. Remember the fuss about "free-range kids" a few years back, and the hate levied upon the woman who wrote about allowing her 11-year-old to ride the subway unsupervised?

    57:

    Heroin is more addictive than mobile phones, more useful, and less damaging both to individuals and to society (practically all the "problems" associated with heroin are created ex nihilo, or else massively exacerbated, by its illegality, and are not an inherent property of heroin itself). Enforcement does not reduce heroin use, but education does, even when the premises it's based on are so flaky.

    Um, we can look at numbers for that, sort of. According to one unverified source (sorry, too busy to do a proper search), in 2012 and 2013, there were a bit more than 3000 deaths/year from distracted driving, of which some (presumably large) proportion were due to cell phone use. In 2012 and 2013, there were over 40,000 drug overdose deaths in the US, and that's now jumped to over 60,000 (source). Just to make everyone miserable and start another pointless derail (as in don't bother yet), US gun fatalities from all sources were over 30,000 (source).

    While I don't think any of these figures is necessarily definitive, the suggestion is that drugs and guns are roughly ten times more lethal in the US than are cell phones. Presumably that is different in other countries, but as noted above, I'm racing a deadline, so if other people want to continue the derail, they'll have to provide data from elsewhere.
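Taking the comment's own (admittedly unverified) figures at face value, the arithmetic behind "roughly ten times" is just:

distracted_driving_deaths = 3_000   # per year, 2012-2013, commenter's figure
overdose_deaths = 40_000            # 2012-2013; since risen past 60,000
gun_deaths = 30_000                 # all causes, commenter's figure

print(gun_deaths / distracted_driving_deaths)       # 10.0
print(overdose_deaths / distracted_driving_deaths)  # ~13.3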

    Anyway, it's good to see people seriously talking about addiction as a basic enterprise in capitalism. Drugs (including alcohol and arguably sugar and caffeine) have been part of capitalism since its founding, along with unfree labor and weapons. That's one part that gets left out of the story above, I think.

    58:

    I used to ride around the London Underground late at night on my own when I was 11, taking deliberately circuitous routes because they were more fun; my response to that kind of nesh paranoia is short and not particularly sweet.

    59:
    We made a fundamentally flawed, terrible design decision back in 1995, that has damaged democratic political processes, crippled our ability to truly understand the world around us, and led to the angry upheavals of the present decade. That mistake was to fund the build-out of the public world wide web—as opposed to the earlier, government-funded corporate and academic internet—by monetizing eyeballs via advertising revenue.

    Yeah. More and more I feel like "we'll support everything with ads!" is the web's Original Sin. I was delighted when Patreon made it economically feasible to publish my weird-ass comics online with no ads, and horrified when that lifeline was endangered by their fumbling attempts to make it work for people who were using it in ways it wasn't designed for.

    I've been thinking about this a lot, lately, now that I've left Twitter for Mastodon. It feels so strange to be on a social media site that's got no advertising, that's run as a hobby because people want to have a place for a community to gather and chat. And yet it feels super familiar; it's a flashback to the days of dial-up BBSs being run out of someone's pocket as a hobby, maybe with some percentage of the users kicking in enough money to cover costs. The weirdest part right now is that I can be caught up in an hour or so, and then I have all this time that I could go do something else with, and a habit of staying there looking for new bits of microcontent to keep me from ever having to remember what it was like to have an attention span.

    (well maybe the weirdest part is that I run a tiny corner of that social network omfg, that's pretty frightening sometimes, also pretty amazing)

    I also keep on thinking about how history is cyclical, too, and how (as you note) we have had pretty much everyone who survived the Great Depression die. That's a shoe I'm really afraid of finally dropping; enough of my net worth is in index funds for that to be pretty scary. (Better go put some of it in cryptocurrency! :) )

    And, if you haven't heard of them yet, I also want to direct your attention to the idea of the "public benefit corporation" as one way people are trying to embed human need into the loop of these paperclip maximizers; I became aware of the concept when Kickstarter turned themselves into one. It's a way to legally bind the company to consider more than just blindly increasing shareholder value at the cost of anything and everything else - or, to use SF analogies, it's a weak version of Asimov's First Law of Robotics.

    60:

    I think we have a fair idea what the microbilling alt-history would look like in the form of freemium games. It's still a paperclip maximizer maximizing eyeballs and addictive stimuli.

    61:

    I don't think deaths per year (DPY) is a very useful or relevant metric; as your distracted-driving citation implies, phones (excepting bizarre and unrepresentative freak cases) are not capable of directly causing death, whereas drugs and guns are (as are cars (and also mains electricity and swimming and stairs and lots of other everyday irrelevant things)), so the only usable figures are from indirect effects which are tenuous, don't mean much, and in any case concentrate too hard on a single very limited aspect to capture any general effect.

    Conversely, while deaths from overdose are distinct, usually unambiguous, and easy to count, they too fail as a metric on the same grounds of excessively narrow focus, and also of the obfuscatory advantage deliberately taken of that aspect to promote the agendas of the agencies that publish them. Many drug overdose deaths are attributable not to the properties of the drug, but to the consequences of enforcement; the user takes what they believe would be a reasonable dose of a drug with the properties they are expecting, but because of the necessity of getting it from an unregulated black market supply chain it turns out not to be the same concentration they thought it was, or even not the same drug. The bald DPY figure does not express this aspect; nor does it even include deaths from disease transmitted by sharing needles because their supply is artificially restricted, let alone express the misery of having the disease before it kills you.

    It also bypasses the point that a number of the people it counts would not have even considered taking opiates of their own accord for fun, but became addicted through putting their trust in a capitalist health care system which corporations use to make money by selling drugs - and having been accustomed to well-defined doses of pure product were even more vulnerable to dubious street purity when the official supply failed them.

    That point is significant because corporate drug pushers, like phone addiction, are a social health problem to which rather few actual deaths can be unambiguously attributed in relation to its overall pervasiveness. Particularly in the area of mental health, where the indefinite and subjective nature of symptoms so facilitates large-scale bullshitting, there is massive scammery in the name of getting people to act as money suppliers in return for drugs they don't need, or which end up making them worse, or which don't treat their diagnosis, or which don't treat any diagnosis, or which treat a nonexistent diagnosis that was made up to be something that some random jollop could be sold as a cure for and was then written into the DSM by bent doctors acting under corporate pressure (in what seems to be a system whose internal politics strongly encourage them to bend under corporate pressure), etc, etc... With the kinds of drugs involved this doesn't generally kill people much, at least not in any way you can point to or count numbers of, but it does result in an awful lot of people getting fucked up both by receiving treatment which is wrong and by not receiving treatment which would be right. Moreover, it spreads to other countries, for reasons like the international nature of drug companies and the lack of awareness or acknowledgement.

    Corporate exploitation of addictions to things other than drugs by means which lead to the subversion of national political systems and the increase of Nazism is a social health problem of the same kind. While you can easily count the number of people directly murdered by Nazis, that - in the current state of the modern situation, as opposed to the historical one - does not say anything meaningful about the scale of the overall problem. It doesn't say anything about how close we may be to the point where, if things are permitted to continue to escalate, positive feedback kicks in and mass opinion starts to generally accept people being murdered by Nazis. It doesn't capture the effect, either in amount of misery or in actual suicides, of people's lives being made more unpleasant by increasing racism and hostility. It completely avoids any relation to future increases in the number of old people dying of hypothermia due to lack of heating in another country where the deployment of the same techniques has caused that country to proudly and happily decide to stab its own economy in the guts. Etc, etc...

    This is all the sort of stuff which is at best difficult or impossible to express in quantitative terms, and of which it is often meaningless to even try; there just isn't anything to count. But it is still well established qualitatively; Charlie's written a whole essay about it :)

    62:

    What you've described is a society especially vulnerable to systems collapse.

    Something like the Late Bronze Age collapse, with the mass movement of refugees (a modern-day version of the "Sea Peoples") being triggered by climate change.

    63:

    At this point I'm also a little surprised that some big church hasn't decided that surveilling their followers is a wonderful way to ensure a flow of big donations.

    While overall it is a small number, there are a growing number of churches that require members to report their sources and amounts of income so they can be pressured into the proper "tithe" amount.

    64:

    The geolocation stuff is terrifying, however can't you just turn off the broadcasting of your location from your phone?

    Are the police going to let you do that?

    65:

    I mean, your bank doesn't make money off of you using its app, does it?

    Actually it does, in a reverse sort of way. It reduces the need for physical local bank employees and for call center employees. I rarely physically go to a building for any of the 4 banks I deal with on a regular basis. (Checking account kind of basis.) And I call them even less. All because they first developed decent web sites and now they have apps that let me accomplish 99% of my goals. My wife and I wrote a check from one account and got an official bank check the other day only because a credit card for something we needed to spend money on before the end of the year arrived late in the mail. And if I hadn't waited till the last week to set things up I'd not have had to do even that. And recently I ran into one bank's app deposit limits for the month and had to visit a real branch to deposit a check.

    Apps make the banks money. But by reducing expenses instead of generating more income.

    66:

    The average age of cars in the U.S. will likely increase as the economy bleeds out.

    Cars flat out last longer. In the US it started with the Japanese imports having better reliability back in the 70s/80s. Now most new cars are expected to last at least 10 years. I just got rid of a 96 Explorer that was near end of life, but another fellow gave me $400 and planned to keep driving it for a few years. My current truck is a 10 year old Toyota that I expect to last at least another 10 years. My 2016 Civic may still be on the road in 2040. (If gas is not the equivalent of $50 / gal.)

    When I started driving in 1970, cars were considered old after 3 years and required a lot of care to go more than 6 to 8 years.

    67:

    Most (v4) IP addresses allow to identify the country or even the city from where you access the internet.

    There are free web sites that will typically get you closer than that. And many wifi hotspots are in various registries with their address down to under a few hundred feet. So between those, at times you can figure out a location to the block level. And the ISPs now sell their data to others, so you can really nail it at that point.

    68:

    Cue the inevitable "But you can't do that because..." - Stop right there. I don't care what comes after "because", I deny it.

    Sure. If you withdraw from the world as it is today. Or at least as it is in first world countries and cities.

    Go take your family and live on a remote mountain top by yourselves or in a fundamentalist group and you can do it your way. Other than that you can't operate in the reality you desire.

    69:

    I also keep on thinking about how history is cyclical, too, and how (as you note) we have had pretty much everyone who survived the Great Depression die. That's a shoe I'm really afraid of finally dropping, enough of my net worth is in index funds for that to be pretty scary.

    I was born in 54 and am likely one of the youngest people who was paying attention to the Nixon Watergate mess. Most of my friends were not. And I think that a lack of memory is a big reason for BREXIT and DT and like things that are going on.

    70:

    "(At this point I'm also a little surprised that some big church hasn't decided that surveilling their followers is a wonderful way to ensure a flow of big donations.)"

    It isn't big (unless you believe their hype) but that describes Scientology to a T. "Parishioners" have to undergo "Sec Checks" periodically (especially if they get in trouble), and are encouraged to file "Knowledge Reports" on each other if they witness someone breaking the rules.

    71:

    The Bronze Age Collapse gets all the press, I suspect mostly because it's a mystery that acts like pareidolia (connect the dots to make any pattern).

    There are other cases where refugees overwhelmed the structures of stable societies, such as the European Age of Migrations. IIRC, the Spanish colonization of Florida ran into the same problem during a war, and Jordan and Kenya are running into something similar now with refugees from Syria and Sudan.

    Later this century, I suspect we'll see migrations that put all these to shame, coming out of Bangladesh, Shanghai, and the Mekong River Delta. Florida might be a bit of a side show, really.

    72:

    Musk hijacked a paperclip-factory bandwagon to get us to Mars (away from this "sacred" cabin-fevered biome) asap and to light a fire under the ass of EV adoption. The AI paranoia is flawed, but even that's a good thing, because it hits closer to the root of what's wrong with the world (not crony capitalism) than handwaving about government regulation of everything and the kitchen sink.

    The problem is electoral apathy. To how the sausage is made and to rotten sausages (crony capitalism for one, and obviously the present day world-spanning Yellowstone magma chamber of corporate bullshit, incl the giant ever-swelling white zit of a PR department at the White House, or why AI can't just be neglected). Self-perpetuating bad culture. People are too busy dreaming of heptaquilted TP and superbowl weekend and not enough about brass tacks like how that sausage is made and whether their own BBQ needs cleaning too. If "bread and circuses" was a thing thousands of years ago, it won't be less of a thing once both bread and circus are jacked up and plugged in and microtransactionned with modern technology.

    CULTURE is what "broke the future". Same as what brought us this clown of a US pres. Enough people were stupid/ignorant/passive enough to allow/vote for (pay for) what Trump & co peddled then as mere corporate products (and Uber and Weinstein and anything/one enabled by crowdfunding), and likewise now with that same clown peddling his junk dressed up as political merchandise. Technology is a multiplier of human expression and if culture is asleep at the wheel, Vinge's "bad hair day apocalypse" era will creep in with exactly these sorts of "5%" out-from-nowhere anomalies.

    And if the world of the future (yesterday, really) is too complex for Joe "Lambda" Blow to keep up with and keep in check, then AI augmentation (ultimately, but good enough if beginning like today's manifold benign AI assists, later something like The Diamond Age's Primer in various forms for various ages/disciplines/cultures/etc.) is one of the very top priorities as Musk tweets about. Tweeting is good if it lights a fire under public apathy to technological sausage making.
    Anything that increases computational content and speed of public discourse is good.

    IMHO aging is the other top priority. Because people with (at least) twice as much life experience have to make for a more savvy electorate and consumer population. WRT everything from TP to geopolitics (e.g. North Korea or Brexit and how those butterflies really aren't just distant abstractions). It's a needless waste to allow reproductive biology to trump the biology that produces the best parts of humanity: the wisdom that's inarguably proportional to lifespan and the reason we stopped living in caves, abolished slavery, etc. The wisdom that influences AI.

    If people's lives were less made of suckage due to most of them wageslaving for most of their unmodified/involuntary lifespan for the sake of maybe 1 decade of freedom, then society overall should see an uptrend in quality. Longer lives would give people hope and/or something more to look forward to. Even more so because of the exciting times we live in - because of technology and everything it potentially allows us.

    edit - now that I've gotten to the end of the video with the Q&A, and seen that you purposely wrote it as a wake-up call, I feel less annoyed and more pleasantly disagreeing.

    73:

    To make the creepy use cases even creepier, there was a story that went around in May that Facebook can help advertisers target people who feel "insecure" or "worthless"; in a word, vulnerable.

    74:

    Thank you. Points:

    1: "public benefit corporation" - presumably in the US? Interesting idea. We have "Not for Profit" companies & corps.

    2: From the above & Charlie's lecture ... Corps as "people" - NOT in the UK, nor much of Europe. Lax though our corporate-control laws are, they are not quite that lax, & the idea of "Economic & Social Responsibility" ( Often shortened to "ESR" ) is gaining traction here, too.

    3: "Mastodon" - do tell? Sounds interesting.

    75:

    I scored ILIE, no surprises there really.

    76:

    IMHO around the end of the cold war the west fell into an intellectual coma of the 'myth of inevitability'. Once you assume that it's all 'inevitable' you don't have to pay attention, and we didn't. I think it was Leonidas Donskis who called this belief in inevitability 'liquid evil' and I'd agree. Unfortunately, when people wake up and realise that there's precious little inevitability, they don't automatically embrace the idea that they have a role to play in history; see Brexit, MAGA et al and the rise of nationalist populism.

    77:

    In other words there's always some smart arse with a hankering to build a V2...

    78:

    The situation / the description of the future sounds very dramatic. A dramatic situation asks for a dramatic solution. So, have you read some of the essays / texts Theodore Kaczynski wrote? What do you think about his (kind of) opposition to (modern) technology?

    Is there a chance that one could win against global (AI) / corporations with relatively "soft" actions or will there be an unavoidable "bloody" revolution, in the end?

    79:
    But it's impossible to punch a corporation, and it may not even be possible to identify the source of unfair bias when you're dealing with a machine learning system.

    Lufthansa, under investigation for fare increases on domestic German air routes after competitor Air Berlin went bust, recently tried to argue that they didn't raise prices - "The pricing AI did it!"

    The Federal Cartel Office doesn't seem to buy that, though - Reuters article, hope the link works.

    80:

    Hm. In the higher authorities, the highest positions are often held by people with a political agenda. These people belong to a political party. The political party becomes important through successful elections. Electioneering costs an awful lot of money. Where does this money often come from?

    Yes. From industry / corporations.

    Here the circle closes: money for electioneering, provided the party "remembers" the donor afterwards.

    And so I wouldn't count on the reason / rationality of high authorities.

    P.S.: In the past, the laws governing the actions available to the Federal Cartel Office were designed to turn it into a cosy paper tiger.

    81:

    I love the last question (or rambling comment), which cuts to the bone of this: she had to rejoin Facebook because it is so good at what it does. We won't be destroyed by bad tools, but by really really good tools, which are just too good to give up. Like heroin, like cocaine, like land mines, and gasoline; it's the really good shit which gets its teeth in and won't let go.

    I'm giving us two more election cycles before a first world nation tries a prohibition level ban on social media and deep profiling. I don't think it will be the USA, I expect the US will be used as the example as why it is necessary.

    82:

    "Ignoring peer pressure" really needs to become a matter of basic education right from the very start of school.

    While you're at it, can you please legislate to remove the typical 15 year old's sense of their own immortality, belief in the stupidity of their elders, and tendency to say "watch this!" (where their 18 year old elders say "hold my beer")?

    I hesitate to use the loaded phrase "basic human nature", but it is pretty clear that we're social animals, we learn from each other and by doing shit (including crazy stupid shit that doesn't kill us), and that legislating to change this is a fool's errand (all we can do is try to stupidity-proof the environment where our young peers are maturing until they're able to evaluate risks sensibly).

    As for giving kids phones, I got my first wrist watch when I was ten. It cost about £3 (or £20-25 in today's money) and it got bashed up but it enabled me to get places on time. Today, a dumb phone costs about £20-25 and needn't be linked to a credit card, it can be topped up using vouchers: in return, it means the parents always know where the kid is, and the kid can always call for help. I fail to see any way in which basic mobile phone functionality in such situations isn't a lifeline (although we can debate the wisdom of giving young kids smartphones and unlimited app top-ups until the cows come home).

    As for your anti-smartphone rant, you're as off-base as someone calling for the abolition of home computers in 1982, but I'll put it down to mild ASD for now.

    83:

    Forked app stores? Amazon runs one for their Fire devices, and I believe there are others; all it takes is money and will. This usage refers to a fork in the road.

    84:

    I'd dispute the bit about 70s/80s Japanese cars somewhat: American management "culture" did make them look better, but in western Missouri they're rare, having rusted out faster. I will note that I've only ever bought a new car once, and couldn't afford a Japanese car I could comfortably drive at that time (long legs).

    85:

    Link works & note the official reply. A "corporatioN" must have what used to be called a "controlling mind" in Europe & Britain - & if you are the boss, you're responsible. ( Supposedly )

    86:

    Although there are vast vested interests at play in Brit & European elections, the open pouring-out of vast sums, as seen in the USA, is illegal over here - fortunately. Yes, of course we have corruption, but it is at least kept under some sort of control - so far.

    87:

    In the U.S., the large donors resemble a fourth branch of government; their input has priority over the wishes of the voters. As a whole, we don't seem to be learning all that fast, so we get to be the horrible example.

    88:

    Yes, of course we have corruption, but it is at least kept under some sort of control - so far.

    Ya think?

    I recall a journalist covering a Tory party conference a couple of years ago who noted that the delegates there were split evenly into three groups: parliamentarians and their staff, constituency party members, and corporate lobbyists. (Yes, the lobbyists clearly outnumbered the MPs and nearly outnumbered the MPs and their combined staff.)

    Direct election nobbling is illegal, but pushing policy papers at ministers and then offering them cushy jobs when they retire is SOP.

    89:

    Later this century, I suspect we'll see migrations that put all these to shame, coming out of Bangladesh, Shanghai, and the Mekong River Delta.

    Shanghai might not be such a problem. There are lots of ghost cities in China, all in the interior, and according to a Beijing professor I knew one reason the government encouraged their construction was that it provides somewhere for people to go when the seas rise.

    90:

    paperclip maximizer = replicator

    So we have had replicators since life began. The result has been a flowering of life, particularly of metazoa.

    AI today and in the near future is primarily software, so the nearest replicator analogy is memes. We've had those ever since H. sapiens could communicate. Human brains are great copiers and built to persuade. We expect memes to try to replicate as much as possible and use up the cognitive capacity. As with genes, the result has been a flowering of ideas.

    As natural replicators have shown, no single gene, or gene embodied as an organism, has dominated. There are so many different "paperclips".

    Worrying about AI controlling everything is like single-cell eukaryotes grumbling about those early metazoa consuming everything and turning the planet into porifera and ctenophores. As evolved metazoa, we see the benefit of that metazoan takeover.

    The various political -isms are just fights about how the metazoa should organize. Like Volvox (= communitarian) or ctenophores (= authoritarian). We know the end result: centralized brains won out. That hierarchical human societies have proven most stable should be an early indication of the future.

    None of this helps with writing near-term scifi, but it might help with far-future scifi.

    91:

    In other words there's always some smart arse with a hankering to build a V2...

    Or collect a LOT of smoke detectors.

    https://en.wikipedia.org/wiki/David_Hahn

    92:

    Given that there is some evidence to suggest that people in general become more conservative and less flexible as they get older, AND that in part they gave us Brexit and Trump, I would suggest we prioritise mental flexibility and agility over physical aging - presuming, of course, that the former don't have a significant physical component that would make anti-aging treatment a silver bullet for all sorts of biases.

    93:

    Locating a device by wifi means knowing the location of the wifi access point device.

    Google etc. have built databases of wifi access point locations (maybe by recording the GPS positions reported by devices that attach to those access points and do report them?). It's probably roughly as accurate as phone GPS.
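    As a rough sketch of how such a database might be used (a toy weighted-centroid estimate: the access point coordinates and signal strengths below are invented, and real systems use calibrated path-loss models and far more data than this):

        /* Hypothetical sketch: estimate a device's position from a database of
         * known Wi-Fi access point locations, weighted by received signal
         * strength. All coordinates and RSSI values are made up. */
        #include <stdio.h>
        #include <math.h>

        typedef struct { double lat, lon, rssi_dbm; } ApObservation;

        /* Crude heuristic: stronger signal (less negative dBm), higher weight.
         * This is not a calibrated path-loss model. */
        static double rssi_weight(double rssi_dbm) {
            return pow(10.0, rssi_dbm / 20.0);
        }

        int main(void) {
            /* Pretend these AP positions came from a wardriving-style database. */
            ApObservation obs[] = {
                { 47.6097, -122.3331, -48.0 },
                { 47.6099, -122.3340, -63.0 },
                { 47.6092, -122.3328, -71.0 },
            };
            size_t n = sizeof obs / sizeof obs[0];

            double wsum = 0.0, lat = 0.0, lon = 0.0;
            for (size_t i = 0; i < n; i++) {
                double w = rssi_weight(obs[i].rssi_dbm);
                lat  += w * obs[i].lat;
                lon  += w * obs[i].lon;
                wsum += w;
            }
            printf("estimated position: %.5f, %.5f\n", lat / wsum, lon / wsum);
            return 0;
        }

    Even crude averaging like this can land you within tens of metres in a dense urban area, which is why the database is worth so much.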

    94:

    What really disappoints me about the internet of reality is how ignorant of simple facts most people remain, even now that it is so much easier to inform yourself.

    Yesterday there was an editorial in the Seattle Times where the editors wrung their hands about city spending. You see, city spending is up 39% over the past 5 years. They compared that to the 11% population increase (yes, astounding growth, Amazon). They failed to compare it to the 32% metro-region GDP growth over that time, probably a bit higher for the city itself (such data is collected for metro-regions, not cities, so we don't have a great handle on the GDP of each city, and estimates are much less often reported).

    This is the editorial board of a significant newspaper. These people are supposed to know how to find facts. Now, sure, it's the editorial board of a family-owned newspaper, which means the boss inherited his position, but he probably isn't stupid, and surely somebody at the meeting should know that the default assumption for no-policy-change government spending is that it should track GDP growth to a first approximation, and thus the growth is very much what one would expect given GDP growth.

    There is no reason to have a personal discussion absent trivial facts anymore, yet newspaper editorial boards are still writing without bothering to look them up. It's no longer a two-hour trip to the library to try to figure out how Seattle's economy has grown over the past 5 years. It's a minute or two with the computer in your pocket.

    This failure was deeply set before Facebook even existed. My father used to occasionally send me right-wing chain emails he got from friends, and I would, every single time, gut them with simple facts found in 15 minutes on the same machine he used to send me the baloney.

    Facebook etc. take this failure further: they provide a pro-active stream of baloney specifically tailored to be what you want, powerful tools for refining that stream, and a community built around your favored baloney.

    But fundamentally it works because so few are willing to spend even one minute to look for facts.

    95:

    A hopeful note: eventually my father rejected this right wing baloney. I think he was saved mostly by a personal interest in science, which exposed a lot of the nonsense he was passed.

    96:

    The geolocation stuff is terrifying, however can't you just turn off the broadcasting of your location from your phone?

    Are the police going to let you do that?

    No. In the US, at least, cell phones are required to track GPS location. They aren't required to let you know they are doing so, or to let you see or use the information, but the police can, and therefore others can also. How hard it is to get that information if you aren't paying for it to be available, I don't know.

    97:

    "As for your anti-smartphone rant, you're as off-base as someone calling for the abolition of home computers in 1982, but I'll put it down to mild ASD for now."

    Curiously, I received exactly that sort of response when I was saying similar things about the burgeoning private car fetish in the 1970s, and when saying similar things about the cheapness of consumer goods in the 1980s. I side with pigeon that the consequences of the way smartphones are constructed and used are becoming unacceptably socially harmful - but please note that I mean exactly what I said.

    98:

    Charlie, you don't tell lies for a living.

    Long ago, in a galaxy far away, aka the late '70s, I was a library page, and another page was a black woman around my age. One day she asked me what I was always reading. I told her mostly science fiction... and her response was, "Fiction? That's like lies, right?"

    I was so shocked it literally took me three days to come up with an answer for her, and I've liked it ever since: no, fiction is not like lies. Lies are where you represent something to be true, when you know it's false. Fiction, though it may tell truths, represents itself to be false.

    99:

    What concerns me about smartphones and similar electronics is the effect they have on parent-child interaction. I am sure we have all seen mothers with small children who ignore the child in favour of the phone even when the child is crying, running away, etc.

    100:

    There is no reason to have a personal discussion absent trivial facts anymore, yet Newspaper editorial boards are still writing without bothering to look them up.

    Depends on whether you are trying to present information or convince someone. My local paper has a writer who is consistently anti-teacher*. He never misses a chance to point out the "massive" increases in teacher salaries over the last 15 years, compared to the "minuscule" annual increase the average Ontarian got over that time period.

    A bit of simple math shows that teacher salaries have increased a whopping 0.1% more than the average worker's over a 15-year time span, yet most readers can't do compound interest in their heads and just look at the percentage increases he presents and think "those bastards got much more than I did".
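    As a toy illustration of the arithmetic (assuming, purely for the sake of argument, that the 0.1% gap is an annual one, and using invented base rates), compounding two slightly different raises over 15 years looks like this:

        /* Illustrative arithmetic only: two hypothetical 15-year salary paths
         * whose annual raises differ by 0.1 percentage points. The rates are
         * made up; the point is that headline cumulative percentages hide how
         * small the annual difference is. */
        #include <stdio.h>
        #include <math.h>

        int main(void) {
            double years = 15.0;
            double teacher_annual = 0.021;   /* hypothetical 2.1% raise per year */
            double average_annual = 0.020;   /* hypothetical 2.0% raise per year */

            double teacher_total = pow(1.0 + teacher_annual, years) - 1.0;
            double average_total = pow(1.0 + average_annual, years) - 1.0;

            printf("teacher cumulative rise: %.1f%%\n", teacher_total * 100.0);
            printf("average cumulative rise: %.1f%%\n", average_total * 100.0);
            printf("difference per year:     %.1f%%\n",
                   (teacher_annual - average_annual) * 100.0);
            return 0;
        }

    With these invented numbers the cumulative figures come out around 37% versus 35%: exactly the sort of gap that looks "massive" in a headline but evaporates once you think in annual terms.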

    It isn't that he can't do the math. He demonstrably can (in other articles). And I've sent him several (acknowledged) letters pointing this out. (Also pointing out that he is conveniently ignoring the preceding decade of wage freezes and pay cuts for teachers, which if included shows that a teacher's salary has actually slipped compared to the average worker.)

    I suspect that something similar is happening with your editorial board. They know what they believe and are simply ignoring inconvenient facts. Whether this is an inability to comprehend exponential growth or willful ignorance you'll have to decide.

    You might find this a useful resource: http://www.albartlett.org/presentations/arithmetic_population_energy.html

    *Anti-government in general, except when the government is increasing police funding or clamping down on those horrid environmental activists interfering with good businessmen :-/

    101:

    Yes, but remember that the rise of the eukaryotes, and especially the ones that included chloroplasts, caused one of the great dyings. There are clear signs that we are in the middle of another "great dying", but there's no guarantee that we will survive it, even if our AIs do. They may not end up requiring the same life support system that we do.

    102:

    What concerns me about smartphones and similar electronics is the effect they have on parent-child interaction. I am sure we have all seen mothers ...

    What concerns me is their addictive nature. I've seen a person in an electric wheelchair driving diagonally across a major street (30 mph speed limit, 2 lanes each direction) while texting. Admittedly that's an extreme case, but the point is, it's not just Parent-Child interactions.

    103:

    I was so shocked it literally took me three days to come up with an answer for her, and I've liked it ever since: no, fiction is not like lies. Lies are where you represent something to be true, when you know it's false. Fiction, though it may tell truths, represents itself to be false.

    I think fiction is still lies - but honest lies, in that it openly tells you that these are entertaining lies. Other stuff, which doesn't tell you that, is dishonest lies.

    104:

    Re: 'Technology is a multiplier of human expression'

    Nice!

    105:

    There are lots of ghost cities in China, all in the interior, and according to a Beijing professor I knew one reason the government encouraged their construction was that it provides somewhere for people to go when the seas rise.

    I can't see it. Buildings, roads and infrastructure all take money and effort to keep them from disintegrating even if they're not in use. If the "ghost cities" were built to cope with sea rise and global warming, they won't be needed for habitation for decades, if not centuries. It's easier and cheaper to build towns inland nearer the time they're required, when they will be a better fit for people's needs.

    My guess is that they were built to absorb excess money sloshing around the Chinese economy, and it seemed like a good idea at the time (Keynes, Milton see...)

    China's been trying a lot of things, some of which work (their high-speed rail network, for example) and some of which don't (their road-straddling bus). Building new towns to absorb their population rise and drift towards urban living from the countryside seems like a good idea but in reality folks go where the towns already are, employing people and providing infrastructure while those towns grow outwards and upwards to cope with the new arrivals.

    106:

    The initial seeding was done by the Street View cars I believe, updating may well be done when location tracking can use multiple sources.

    107:

    Charlie --

    A few weeks ago I listened to an interview with Jaron Lanier. He said almost word for word what you just did, especially the part about "fundamentally flawed, terrible design decision back in 1995... monetizing eyeballs via advertising revenue." When I read that, I actually thought you had cribbed from Lanier.

    One difference is that unlike you, Lanier was personally involved in making that design decision, and now regrets it terribly.

    108:

    Re: '... the consequences of the way smartphones are constructed and used are becoming unacceptably socially harmful'

    Think the key word here is 'consequences', and the missing concept is 'limits'. IMO, the biggest barrier to recognizing and evaluating consequences is the persistence of belief that only physical injury is 'real'. (Emotional and cognitive injury are not 'real'.) This may change once the WHO adds “gaming disorder” to its list of mental health conditions in its 11th International Classification of Diseases guidelines (2018). The tie-in to mobile phones is that mobile phones are how a large segment of the population access/play video games. I understand that this is only one of many harmful ways that mobile phones can be misused. However, once this connection becomes established, it will probably become easier to develop 'safety limits' and other guidelines re: mobile phone usage.

    http://www.bbc.com/news/technology-42541404

    109:

    Re: 'China ghost towns'

    Depending on the source, this is already changing: more businesses and gov't departments have moved into these cheap/affordable areas, which btw have good transportation links to major cities.

    https://en.wikipedia.org/wiki/List_of_under-occupied_developments_in_China

    And here's a list of US 'ghost towns' including some prime beachfront areas:

    https://en.wikipedia.org/wiki/List_of_ghost_towns_in_the_United_States

    110:

    The initial seeding was done by the Street View cars I believe, updating may well be done when location tracking can use multiple sources.

    Time Warner[1] had a policy of offering any business account free public WiFi. Well, free for anyone with a Time Warner internet account. Restaurants and other public-facing businesses ate it up. And to be honest, you had to expend effort to NOT have it.

    But that meant they knew the location of millions of computers to within 100 feet or so whenever they used these access points. You know they were selling that to others for correlation.

    [1] Now that Time Warner and Charter have merged into Spectrum, I think this policy has become standard for Spectrum, but there are a lot of ongoing non-alignments for those of us with both old Charter and TWC accounts, depending on where your account is located. [eye roll]

    111:

    Yes I do think OF COURSE we have corruption & it's bad & should be stamped on - just that it isn't (yet) as bad as that in the USA - though that may change unless something is done about it... ( OK ?? )

    112:

    This is the editorial board of a significant newspaper. These people are supposed to know how to find facts.

    To some degree I think it is wired into the DNA of how print reporters were trained.

    Things like: water usage is up 50% in city A but only up by 50,000 gallons per day in city B

    Just what is any sane person supposed to do with that statement? Especially when there isn't enough data in the article to compare gallons used per day across the two cities.

    113:

    I am reasonably used to believing that corporations are organisms. IIUC, Charlie's thesis, loosely, is that some social media organisms are becoming pathogens. Can we then expect a transition from pathogen to parasite to symbiote, at least for some of them?

    114:

    Addictive smartphones, etc .... TRY THIS RAIB report of terminally stupid teenager who walked into the path of a train with her earphones & jingle-jangle at full volume ... Apparently the on-train camera showed her finally looking up about 0.6 second before she was converted to red jam.

    Darwin awards, here we come!

    BTW, I liked one excerpt from previous near misses at said crossing: Quite

    115:

    Yes, there are a lot of hand-waving post facto explanations out there for all that excess capacity, but the real reason for all the ghost cities is actually pretty straightforward. Fundamentally China needs to keep its growth rate at 8%, to ensure that the standard of living and employment growth rates keep pace with its increasing population. If it doesn't, and growth starts to drop below (roughly) 6%, then there'll be serious trouble, along the lines of rioting in the streets. So to ensure that the growth rate continues (and to avoid the consequent job losses and civil unrest at all costs) the central planners have been using stimulus and credit expansion to meet the short-term growth targets. Of course it's clearly impossible for this rate to continue indefinitely (unless you've got a second Earth somewhere), so sooner or later there's going to be a crunch, and when it comes it will be exacerbated by the debt burden they've built up. Unfortunately the West treats the Chinese economy as some sort of Magic Dumpling from which you can forever keep taking slices, but the reality is that this particular dumpling is really made up of toxic debt, overcapacity, zombie enterprises and capital misallocation. A business colleague of mine calls China's growth rate policy 'sleepwalking into a threshing machine'.

    116:

    There's also Kilamba in Angola, built by the China International Trust and Investment Corporation.

    http://www.bbc.co.uk/news/world-africa-18646243

    5,000 hectares 18 miles outside the capital Luanda.

    750 8-storey towers where hardly anyone wants to live...

    118:

    That's an obsolete view. Go read Thompson's Relentless Evolution for a more useful take (the mosaic theory of coevolution). It's a way of describing how relationships among organisms vary through space and time.

    Anyway, corporations are not organisms. They're pieces of paper, used as a legal tool. There was a long tradition of assuming that plants were parts of superorganisms called plant communities, while animals were parts of superorganismal animal communities, and so forth. The idea was that rather than evolving or growing, the superorganism tried to come into equilibrium with the environment, which was assumed to be constant. When the superorganism was disturbed, it underwent succession, as various species did their thing, making the superorganism's habitat more suitable for the next group (more late successional species), until the climax species arrived, and they were in equilibrium with the environment.

    Problem is, this explanation is wrong. When someone figured out how to objectively test this back in the 1950s (published as The Vegetation of Wisconsin), it turned out there was absolutely no evidence for the existence of superorganismal plant communities. All the evidence pointed unambiguously to plants each growing best in the parts of the landscape where they could outcompete the other plants that arrived there. To an uncritical eye it looked superorganismic, because the same few dominant plants tended to win out over time (think oaks, pines, etc.), but there was no evidence of an organized process. Oh, and this turns out to be really important: the climate changes at all scales, even without anthropogenic climate change. A lot of old trees are currently around most likely because of the Little Ice Age, and a lot of tree seedlings are now dying because the place where their parents grew up is too hot and/or dry for them to survive.

    I hate to say it, but the same is true for corporations. As structures to organize human activity, they're pretty good, but when you call them AIs, you ignore all the people who specialize in organizing knowledge specialists into working parties to get goals done, and you especially ignore what happens when those goals conflict with the stated goals on the pieces of paper that describe what the corporation is supposed to do. It's a lovely metaphor, but just as the notion of plant communities screws up conservation work in the face of climate change, you've got to be careful that your discussion of AI corporations doesn't mislead you into inferring behaviors that the corporations don't and won't show.

    119:

    That list is it!? It's my fault I didn't try and find a list of "ghost cities" earlier when I first heard about it. From the looks of that list, it looks like China has fewer ghost towns than Spain did during the Euro Crisis. Does Spain still have those abandoned projects littering the seaside?

    Should the New South China Mall even be considered on that list? This may not be a problem in the UK, but the US has hundreds of dead malls: https://en.wikipedia.org/wiki/Deadmalls.com https://www.theatlantic.com/business/archive/2017/04/retail-meltdown-of-2017/522384/

    It's interesting that China isn't even in the top 5 for GLA per capita (it's the first table in the Atlantic article).

    Probably someone built that mall just as China was transitioning to online shopping and got caught unaware.

    I've heard that China has tended to build new cities ahead of time. The fact that they misjudged on so few districts speaks well for their management. I agree that few countries actually build a city ahead of time for VERY good reason.

    Looking around, I ran across these articles

    https://gizmodo.com/no-one-really-knew-how-many-ghost-cities-existed-in-chi-1740552111 https://www.technologyreview.com/s/543121/data-mining-reveals-the-extent-of-chinas-ghost-cities

    It identified more than 50 ghost cities in 2015, but it uses a definition of "half the minimum population density an urban area is expected to have". I don't know enough about China to know if that's a reliable metric? Also, is their resolution too small?

    120:

    I have an amorphous memory of having read the same thing somewhere (possibly somewhere as authoritative as a blog comment), but on 2nd thought, if you were Cambridge Analytica wouldn't it be in your best interest to seed that line into the minds of politically engaged liberals? And if you were them you'd have all the data you needed to know just where to start.

    121:

    On the topic of mobile phones, geographical positioning and mass surveillance, I think there are two points that I would like to add to this discussion.

    First, today's mobile networks actively depend on knowing their users' approximate spatial position - otherwise they would not know through which cells to route calls. This is true for 2G technologies up to 5G, and I believe the state of the art in positioning a couple of years ago was called AECID. In a nutshell, it calculates a phone's position by looking at the signal strength "fingerprint" (i.e. which nearby cells have what quality of signal) and referencing it back to the operator's internal cell network planning map, which has all these coverages and levels neatly pre-calculated. In dense cities there are a lot of cells with smaller coverage (also think how reception is done in subways), thus allowing for more accurate positioning without the need for any permissions from the phone's user. Admittedly, this will require the cooperation of the baseband modem, I believe. However, I assume this positioning technology is or will be silently baked into newer phones, and having it could be a prerequisite of getting access to latest-generation cell networks.

    So, now the movements of all users of a cell network can be conveniently stored and mined from one or several central locations. I think it's pretty safe to assume that enterprising law enforcement agencies have already hooked their stuff up to this system and store this information.
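    For a feel of the fingerprint idea (this is not AECID itself, just a toy nearest-neighbour match of a measured signal-strength vector against a pre-computed coverage map; every cell, signal level and coordinate below is invented):

        /* Hedged sketch: match an observed signal-strength "fingerprint" against
         * reference points from a pre-computed coverage map and report the
         * closest one. Real network-side positioning is far more elaborate. */
        #include <stdio.h>
        #include <math.h>

        #define NCELLS 4   /* cells appearing in the fingerprint */
        #define NREF   3   /* pre-computed reference points on the map */

        typedef struct {
            double x, y;             /* map position of the reference point */
            double rssi[NCELLS];     /* expected level from each cell, in dBm */
        } RefPoint;

        /* Euclidean distance between two fingerprints in signal space. */
        static double fp_distance(const double *a, const double *b) {
            double d = 0.0;
            for (int i = 0; i < NCELLS; i++)
                d += (a[i] - b[i]) * (a[i] - b[i]);
            return sqrt(d);
        }

        int main(void) {
            RefPoint refs[NREF] = {
                { 100.0, 200.0, { -60, -75, -90, -95 } },
                { 150.0, 210.0, { -70, -65, -85, -92 } },
                { 120.0, 260.0, { -80, -72, -70, -88 } },
            };
            double observed[NCELLS] = { -68, -66, -84, -93 }; /* phone's levels */

            int best = 0;
            double best_d = fp_distance(observed, refs[0].rssi);
            for (int i = 1; i < NREF; i++) {
                double d = fp_distance(observed, refs[i].rssi);
                if (d < best_d) { best_d = d; best = i; }
            }
            printf("best match: reference point %d at (%.0f, %.0f)\n",
                   best, refs[best].x, refs[best].y);
            return 0;
        }

    The denser the cells, the finer the grid of distinguishable fingerprints, which is why this works so much better in cities than in the countryside.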

    Second, to me, the Equifax breach and all the other fun news from Silly Con Valley last year demonstrated pretty well that this sort of personal data will probably be stored not very securely and will eventually be sold off for fun & profit. If it is not already.

    Personally I am hoping that the new EU directive on data protection will force some sort of reckoning in this area. The US, on the other hand, has decided to take a step back and end Net Neutrality, which will probably make it much, much, much easier for bigger companies to rig up some bad-faith scheme in order to mine their customers' data and sell it to their hearts' content, because the alternatives are their equally bad competitors or just to go pound sand.

    122:

    By the way, thanks a lot for the talk. I really enjoyed it as it neatly summed together and verbalized a lot of the things I have been thinking about during the last few years.

    123:

    The US on the other hand ... That is a short description of one facet of a state headed directly towards a corrupt-corporate state - which is one of the definitions of fascism, isn't it? ( yes/no? )

    124:

    China kind of has to build the cities ahead of time. Historical China was built on village industry - a gazillion tiny places where people spent just enough time farming to feed themselves, and the rest of the year on crafts. About twenty million Chinese people are leaving those places every year to go somewhere where there is actually work to be done. If they did not just plop down cities by fiat, China would have a slum problem out of this world.

    Eyeballing the percentages of rural versus urban population in fully industrialized nations, this massive movement of people will keep going for another fifteen years or so. At which point being invested in the Chinese housing construction business might not be the most well-advised idea. But panicking about ghost cities in the nearer term is mostly just fear-mongering from people constitutionally unable to comprehend dirigisme working at all.

    125:

    Coincidentally, I drew this a few days ago while watching tourists in a local café.

    I also drew this:

    Most of the customers are obsessed with their phones, so one ends up drawing a lot of phones and hands. This customer was particularly obsessed; indeed, I'd term her gluttonous. She was cramming crisps into her gob with one hand; and if that one hand couldn't leave her food alone, the other hand couldn't leave her phone alone. She was constantly looking at and prodding it, devouring it with her eyes. Dividing attention between snack and Snapchat, she must have been gaining satisfaction from neither.

    Despite being rough and unfinished, both drawings make a point to me, or perhaps a meta-point. No-one else was drawing. Or writing on paper. Or (as far as I could see) typing anything more sophisticated than a text message. Most of the phone users seemed passive, fingering their screens until prompted by incoming messages.

    I'm too impatient to be like that. I prefer creating. I like to think that this would give me some protection against the addiction-seeking deep-learning horror that OGH describes. Let me borrow the pharmacological notion of receptor antagonist:

    And let me further explain what the green "Antagonist" blobs are with this collage:

    In other words, whatever happened to hobbies? To doing things, other than typing, with one's hands?

    126:

    Treason doth never prosper: what's the reason? Why, if it prosper, none dare call it treason.

    The reason that corruption is so low in the UK is that those involved have arranged the rules and brainwashed, er, educated the public that what they do isn't really corruption. Even if central government were held to the same standards that it holds local authorities, charities etc. to, the amount would go up massively (fivefold?). Also note that many of the activities that cause rows in the USA pass almost without comment in the UK.

    127:

    It's been about 16-17 years since steered beams started getting deployed, requiring a base station to know not only how far away the phone is, but also what approximate angle it is at. I don't recall where state-of-the-art beam steering was, back then, but I think the beams were about 5° wide.

    128:

    In case it's not clear, I'm asking whether there's some way we can give people a kind of mental immune system that would protect them against addictive software and media, including the attention-maximising AI-generated videos. Perhaps one thing we can do is rejig education so as to encourage children into other interests, especially off-screen ones: and to make these more compelling by somehow tying them up with the children's sense of self-worth.

    129:

    Contrariwise, look at the giant "Charities" tax scam in the US (!) And I agree that it's worse than it's painted, especially at the local authority level (see "Rotten Boroughs" in the "Eye"), but fivefold... naah, not buying it. Doubling, yes, I'd believe that, no probs.

    130:

    OF COURSE we have corruption & it's bad & should be stamped on - just that it isn't (yet) as bad as that in the USA

    Disagree: I think corruption in the UK is probably even worse than in the USA. (Hint: look at the Panama Papers and similar. We're the world's dirty corporate money laundry, our press is owned by Russian mafia oligarchs, our public services have all been sold overseas, "our" nuclear deterrent probably can't fly without foreign permission — it's a UK-paid-for extension to the US Navy missile force — we have police spies infiltrating peaceful political groups, we have the use of libel law to chill public discourse (as in Singapore), and so on.)

    We're just better at denial.

    131:

    I note that as we move towards gigabit WiFi speeds, we are beginning to see home wifi routers with multiple antennae doing beam steering to feed connected devices.

    There's also research on using gigahertz wifi radiation for super-accurate indoor location tracking, even through walls. (And there's an example of it being put to malign use in "Dark State", which you can get to read next week.)

    132:

    "our" nuclear deterrent probably can't fly without foreign permission

    Ummm, no. The Trident missiles we "rent" from the United States come from a common pool: they're identical to the ones carried on the US boomers, and some of the missiles we currently deploy could well have spent time in American tubes previously (and vice versa). They don't call home before launch; they are fully autonomous and not hackable from outside, for very obvious reasons.

    The warheads, penetration aids and re-entry vehicles on top of the Tridents are all home-rolled, nothing above the mating ring is American (I could tell you a funny story about... but I can't). There is some co-operation between the US and British nuclear weapons establishments but it's at the intellectual level, no material or engineering transfers since the 1960s when Britain gave the US some Magnox-derived Pu239 to test.

    133:

    Er, no. We don't know what the deal involves, and there is a potential (political) block there. While the official claim is that the missiles and warheads are fully under our control, that's what they would say, wouldn't they? And there have been some very plausible assertions to the contrary from ex-insiders. And, again, that's all hidden behind the Official Secrets Act, which makes any public claim by people of what 'really' happens a bit suspect.

    But let's move on to plausible technical control. Inter alia, the UK no longer makes that sort of electronics, so we cannot be sure that the chips don't have undocumented features. They have an abort mechanism, so they are necessarily listening to the outside, and I hope that I don't have to explain that that is all that is needed. And note that it is enough to abort the missile, so it's irrelevant if the warhead is not hackable by the USA.

    134:

    Actually there isn't an externally-triggered abort mechanism on deployed Trident missiles. There was an explosive charge fitted to flight test vehicles to rupture the motor casing and cause it to stop working properly/disintegrate[1], but that was in case the missile went off course or suffered other problems during test firings. In a full and frank exchange of Buckets of Instant Sunshine there are no backsies if the missile does go off course, and you really REALLY don't want an outside actor to be able to override your Go command over the Internets from a hacked PS4 in Guangjyong.

    Once the commander on the sub decides to fire the missiles they have no way of communicating with them in flight since they remain submerged and out of communication with the rest of the world. This is different to air-dropped weapon systems where until pickle release the weapons platform can be recalled by command.

    Sure, in the movies there's a Big Red Abort button that can be pressed in the nick of time when the script requires it but in real life, no.

    [1] Solid-fuel motors are quite difficult to blow up and it's nearly impossible to stop them working at least somewhat. Even during the Challenger disaster, when the LH2 tank exploded, the two SRBs continued on their merry way, with the SRB that caused the tank explosion working at nearly 100% performance.

    135:

    They have an abort mechanism, so they are necessarily listening to the outside

    Nope, no nuclear warheads have an in-flight abort mechanism. The abort mechanism is provided to prevent them from flying in the first place — in the case of Trident, presumably permissive action locks. (If they fly, you want them to credibly go 'boom' when they hit the target; an in-flight abort mechanism would render them vulnerable to espionage by the primary target nation. On the other hand, you really don't want them to launch in the first place without authenticated orders.)

    I believe under the 30 year rule it was confirmed that the UK Trident warheads are clones of the US design — of necessity, as they're designed to fly on the same launch vehicle. I'd be extremely unsurprised if, over the past 20 years, the UK warheads didn't actually use American manufactured components, except for the fissile core. (After all a conical RV heat shield, a guidance package, etc are not "nuclear explosives" so could reasonably be considered simply additional parts of the UGM-133 missile system.)

    136:

    I'd be extremely unsurprised if, over the past 20 years, the UK warheads didn't actually use American manufactured components,

    No. The US does not export, sell or give away any nuclear-sensitive components to anyone. They may lease or loan hardware -- Canada for a time had American nuclear weapons on loan under Canadian operational control, for example -- but everything above the mating ring[1] on the Trident missile deployed in British subs, from firing circuitry to explosive lenses, re-entry vehicle housings, terminal guidance packages etc. are British made.

    There's a lot of cross-technology sharing, design, equipment testing etc. that goes on between the US and the UK (in both directions) and live-fire missile tests are done using American assets such as the missile tracking ship USNS Howard O. Lorenzen (which recently replaced the USNS Observation Island) since the UK doesn't have such facilities, but the manufacture and maintenance of all parts of the nuclear weapons are all home-based.

    [1]The explosive bolts that hold the weapons bus onto the final stage of the missile are a sore point. The US-made bolts are reputedly not as good as the British-designed ones but the American side of the mating ring is where the bus separation controller that fires the bolts is located so American explosive bolts are used on British-deployed missiles. And I never said that.

    137:

    “...everything above the mating ring[1] on the Trident missile deployed in British subs, from firing circuitry to explosive lenses, re-entry vehicle housings, terminal guidance packages etc. are British made.”

    But it has to be compatible with everything below the mating ring. Whoever defines and controls that interface has an awful lot of influence over the behaviour of anything which complies with it, wherever it's made and whichever side of the interface it lives on, and that's assuming that there are no undocumented (for overseas customers) features in the interface.

    Also my understanding is that there are systems which prevent the missile's first stage from firing until/unless it's clear of the water after being ejected from its launch tube by pressurised gas, and it doesn't seem like it would be beyond the capability of US engineering to nobble that in a manner which leaves your expensive firework bobbing (relatively) harmlessly around on the surface just above the submarine attempting to launch it. In fact, there are probably any number of ways that an attempted deployment could be aborted right up until the moment that the payload bus separates from the third stage, without involving or requiring the co-operation of anything on the "foreign" side of the mating ring.

    You could argue that this is all a bit far-fetched and paranoid, but, given the historic behaviour of the USA towards supposed allies, and the nature of the devices we’re dealing with I’d be extremely reluctant to completely dismiss the idea...

    138:

    Re: China & Ghost Towns

    Okay - see your point re: insanely high (8%) GDP growth.

    Have also been wondering whether China's increased presence in Africa - physically as well as economically - has anything to do with this, i.e., since the Chinese have been more successful than anticipated at scattering around the globe (because China is no longer perceived as an enemy to avoid/keep out), there is less need to move populations to new (already built) settlements within China.

    Another possibility for creating these towns is the massive land grab/reforestation that started in the late 1990s which required the relocation of millions of residents. [Also Chinese Green Wall]

    Too bad the new Chinese land reform plan is going live this year. Some critics feel this will result in the same type/style of corporate ag as currently exists in the USofA.

    139:

    Re: China - not 'ghost' but 'resort' towns

    Interesting - so the over-building provides the growing middle class with a new way to keep up with the Chous [Joneses].

    Other potential uses for such high density over-building: tertiary education (university towns), seniors (retirement communities).

    140:

    Re: ' .... but when you call them AIs, you ignore all the people who specialize in organizing knowledge specialists into working parties to get goals done, ...'

    Sounds pretty much like the way the human brain/nervous system is organized - the conscious/thinking brain part is typically oblivious to what the rest of the brain is doing to keep the body alive. And since it's humans who first designed the AI template, similarities are probably not all that surprising.

    What I'd really like to see is an analysis of the history of the Board of Directors, the corp's prefrontal lobe.

    141:

    Oh, strange attractor time again.

    The problem with stopping that first stage firing is that you've got a very short time gap between the missile breaking the surface, and it realising it's done so and igniting the motor. Once that solid-fuel rocket is ignited, it's heading for the sky.

    The ocean itself is a pretty effective radio insulator - you can get signals through, but AIUI you need quite impressively sized aerials at both ends and it's not something you could hide from inspection. So you need to be through the water surface.

    Getting your missile to find and read a signal within a fraction of a second, a thousand miles from the transmitter, would be quite a feat.

    142:

    The problem with installing a method of nobbling only British-deployed missiles is that the nobbling system would have to be fitted to all missiles since they're chosen from a common pool, not a special production series (aka "monkey model") just for export to Britain. Britain leases the missiles, pays for ones it fires off in tests and regularly recycles missiles back into the pool and picks up refurbished ones from the store. From Wikipedia -- "The pool is co-mingled and missiles are selected at random for loading on to either nation's submarines". Fitting nobbling hardware increases the risk of someone nobbling American missiles which is something they really want to avoid, a bit like not fitting a Big Red Button abort mechanism so beloved of Hollywood.

    The British Sparkly Bits above the mating ring have a lot in common with the American Sparkly Bits but they're not identical -- as I said before there's a lot of sharing of information about weapons design but both sides have their own implementations. For example it is believed that American Permissive Action Locks are a lot more sophisticated than the British protective systems, and safety interlocks are also different, and it's likely that the British Sparkly Bits are metric... Once the weapons bus separates from the D5's mating ring it's on its own so it doesn't have to interoperate much with the American parts below.

    143:

    The ocean itself is a pretty effective radio insulator - you can get signals through, but AIUI you need quite impressively sized aerials at both ends

    AFAIK there are no VLF communications systems operational today; the last ones were shut down some time ago. That begs the question "what replaced VLF?" for submarine operations.

    It was a pain to use especially on the sub side of things. They had to deploy and recover a very long antenna on a preset schedule to pick up VLF messages and they didn't have the capability to answer. The data rate was very low, too.

    My (not very serious) candidate for a VLF replacement is steganographic synthetic whalesong on a one-time pad.

    Sperm whale synthetic voice #14, male: "I Wuv youououou."

    Number One gets the transcription from the sonar officer and checks the codebook -- "Right, it's Wednesday, four repetitions of "ou", that means we move to patrol area Delta in 48 hours time. I'll let the skipper know."

    144:

    That's a problem? Look, everything nowadays includes CPUs, and those are arbitrarily programmable - and I don't just mean in software, or even firmware. Yes, OF COURSE, it would be there in all of them. If I were implementing something like this, it would be a non-obvious undocumented feature of the actual hardware logic. While all this sounds like tinfoil-hat territory, the USA has quite a lot of form in this area.

    I am not claiming that this IS the case, but that the assertions that it isn't don't have any more evidence to back them up than the hypothesis that it is. Nor do the assertions that there is no political requirement for permission. And the UK has a LOT of form in accepting such conditions from the USA, and hiding them from its enemies (i.e. the British public).

    145:

    China's increased presence in Africa ... It's called Colonialism - but not to worry, it's not being done by eevil pink Europeans, so it's all right (!)

    Too bad the new Chinese land reform plan is going live this year. Some critics feel this will result in the same type/style of corporate ag as currently exists in the USofA. And even that might be an improvement ... There are many areas of China, where the "farmers" have been so "efficient" at controlling the wildlife that vast areas of fruit trees (etc) have to be HAND POLLINATED, because there are no bees at all ....

    Also - as in - what's the strangest wildlife reserve on the planet? Korea's DMZ.

    146:

    What happens if someone sells the Russians the missile-nobbling keys? Or the North Koreans? Or the Illuminati? Oops, the entire fleet of American D5s can be switched off at launch, how unfortunate...

    Better not to have any nobbling facility installed in the first place and just, you know, accept that the British independent nuclear deterrent is actually independent. I've seen comments by many people that the Tridents can't be fired by Britain at all, they have dual-key launch controls with an American Naval officer on the other switch, Britain has to get the President's permission to fire etc. Too much Hollywood, I think.

    147:

    The problem with installing a method of nobbling only British-deployed missiles is that the nobbling system would have to be fitted to all missiles since they're chosen from a common pool, not a special production series (aka "monkey model") just for export to Britain.

    If it were me, I'd think in terms of nobbling not the warhead or the submarine (both British) but the astro-inertial guidance system on the UGM-133, which, per even Wikipedia, is able to take updates via GPS. The GPS cluster is under US control, and both the first and second stages burn for 65 seconds; that two-minute window after launch might be enough to allow the US to activate some kind of signal, using GPS as a carrier, that interferes with the missile's guidance and points it at a harmless patch of ocean. As GPS supports encryption (although it's switched off by default these days) there'd be some hope of being able to keep such a kill-signal secret, and relying on the GPS cluster to broadcast it would make interfering with or faking the kill-signal challenging.

    But the usual objection to adding an abort switch to nukes applies: if there's a back door on your nuclear deterrent, then you've got to consider what happens if the bad guys gain access to it.

    Also, in the case of a UK/USA disagreement on when to hit the "launch" button, I'd actually expect the UK to be much more cautious — we're a smaller, much more easily devastated target for retaliation — and also to be vulnerable to political pressure from the US. (As in, a phone call from the White House to the Foreign Office saying, "your PM has gone nuts, yank her choke-chain or we'll yank it for you" would almost certainly get much speedier results than a UK request that the US cabinet invoke the 25th amendment.)

    148:

    That begs the question "what replaced VLF?" for submarine operations.

    A year or so ago New Scientist ran a piece about the use of pulsed neutrino emissions generated by a fusion reactor (not a power-producing one, just a plasma containment field able to induce D-T fusion — think JET, or smaller) to communicate with subs. It's a one-way channel, that relies on the submarine diving deep and trailing a string of CCD photodetectors. Given enough neutrinos and enough pitch-black ocean to act as a detector chamber, the theory was that you could send low-frequency signals to submarines right through the Earth.

    149:

    GPS is not trusted and can be spoofed, so I don't expect ballistic missiles to use it for anything, pretty much. They rely in flight on inertial guidance systems which are much improved from the old days of spinning gyros and strain gauges, with a final sanity check using a star finder before the weapons bus separates. Spoofing a star is a lot trickier than messing with GPS signals.

    The interesting thing for submarines is the rise in the number of oceanographic vessels sailing around various places in the world's oceans and mapping the seabed to a resolution of a few centimetres. This gives subs a way of finding out exactly where they are by terrain comparison without having to go anywhere near the surface to get a GPS reading. Moseying around two hundred metres down and scanning the sea bottom with a low-powered sonar is a much better bet.

    150:

    Ah, that might explain why there's been increasing interest in the bioluminescence of deep sea organisms. There is quite a lot of it you know, especially in the relatively shallow depths that military submarines normally operate at.

    I can think of four different ways to deal with this. One is that they've figured out how to make a fractal VLF antenna, and it's sitting somewhere less obvious. Another is that they've figured out how to use some other manmade feature (such as electrical grid lines or oil pipelines) as an antenna (Keystone XL, perhaps?). A third is that the US Navy, at least, has huge arrays of sonar sensors all over the Pacific Ocean and presumably elsewhere. How does data get from them to the US? That seems a reasonable route for piping messages to submarines, especially when their location is nominally known. The fourth possibility is that the boomers actually cruise near the surface and have an antenna at or above the surface most of the time. If they're away from sealanes, the only thing they'd have to disguise is the wake of the antenna.

    151:

    Talking of breaking the future - and also "security". What do more informed & expert opinions think of this supposed-or-actual pair of computer hard/software faults?

    152:

    Subs and particularly boomers don't like being on the surface or even close to it on the basis "If you can be seen you will be killed". There is a messy way for a sub to get a GPS fix without surfacing which is to deploy a buoy which rises to the surface after a delay, giving the sub an opportunity to move some distance away from it (carefully measured in distance and bearing). The buoy receives GPS data and then broadcasts its location as an acoustic signal which will hopefully only be picked up by the loitering sub and that plus the known offset from the buoy will give the sub a decent "fix". After transmitting for a short period of time the buoy sinks. Bad surface weather conditions and a number of other factors make this a somewhat problematic solution.

    Hopefully no-one who's sub-hunting will spot a buoy like that while it's on or near the surface or when it's transmitting since it means there's a sub nearby which is Too Much Information.

    153:

    For example it is believed that American Permissive Action Locks are a lot more sophisticated than the British protective systems,

    That wouldn't be hard, because Britain doesn't have a PAL system - it relies entirely upon the crew of the patrolling SSBN (much as the original Polaris submarines, and all US missiles until the 1980s or so - see Bruce Blair's articles about a PAL combination of "all zeroes").

    The UK's "protective system", is that the whole crew is involved in getting the submarine into position to launch; there are just too many people involved in the firing chain.

    Regarding VLF, this particular Army Reserve signals unit still have pictures of their balloons on their website. Now, ask yourself what kind of radio signal requires an antenna long enough to require a balloon to hold up one end?
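    The back-of-the-envelope answer to that question: antenna size scales with wavelength, and VLF wavelengths run to tens of kilometres, so even a quarter-wave element is kilometres long. A quick sketch (the exact frequencies are just illustrative):

        /* Wavelength and quarter-wave element length for a few frequencies.
         * 20-24 kHz is in the VLF band historically used for submarine
         * broadcast; 2 MHz is an ordinary medium-frequency signal for contrast. */
        #include <stdio.h>

        int main(void) {
            const double c = 299792458.0;              /* speed of light, m/s */
            const double freqs_hz[] = { 20e3, 24e3, 2e6 };

            for (int i = 0; i < 3; i++) {
                double wavelength = c / freqs_hz[i];
                printf("%10.0f Hz: wavelength %9.1f m, quarter-wave %9.1f m\n",
                       freqs_hz[i], wavelength, wavelength / 4.0);
            }
            return 0;
        }

    At 20 kHz the quarter-wave element comes out at nearly four kilometres, hence the balloons.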

    154:

    Dude, you broke the alphabet, too!

    The Laundry Alphabet

    A is for Auditor, hauls you over the coals, B is for Bob, apprenticed Eater of Souls.

    C is for Chthonians, who burrow below, D is for Dominique, friends call her “Mo”.

    E is for Equoids, will make maggots gag, F is for Forensics, putting Things in a bag.

    G is for Gods, arousing to strike, H is for Half-tracked motorbike. [1]

    I is for Innsmouth, lair of the Deep Ones, J is for Johnny, who sticks to his guns.

    K is for K-Syndrome, nibbling your brain, L is for Laundry, the department arcane.

    M is for Mhari, psycho ex-girlfriend from Hell, N is for Nazgul, bad bedmates as well.

    O is for OCCULUS, saviours on wheels, P is for Persephone, a hazard on heels.

    Q is for Q… You've no clearance for that! R is for Ramona, says goodbye with a splat.

    S is for Spooky, small cat with a shtick, [2] T is for TEAPOT, James Angleton's nick.

    U is for Universes, born with Big Bangs, V is for Vampires, now called PHANGs.

    W is for Warrant, please look at my card, X is for Xenomorphs, awaiting their part.

    Y is for Pinky, the honour is due, and Z is for Zombies, loving Brains, too!

    [1] Kudos to Chris Suslowicz for first dibs. [2] Imagine a cute_cat.jpeg with the meme “I Haz Thumbs!”

    I hope this cheers up OGH a bit. Albeit he neglected alphabetical diversity when picking names for the series. >;-)

    P.S. Megpie71 posted another Laundry Alphabet at the end of the original thread.

    155:

    Meltdown is Intel x86-specific, but is pretty terrifying. There are software mitigations that can be done -- the penalty for recent processors (those with the PCID feature) is fairly low, but can be fairly significant for older processors. Intel just announced they're issuing a microcode patch for processors from the past 5 years which, they claim, will fix both problems. We'll see.

    Spectre seems to impact every processor design that uses speculative execution, and that's where the redesign needs to happen. Specifically, either speculative loads need to not impact the cache unless they're finalized, or the processor needs to undo any cache changes if the load isn't finalized. (Effectively those are the same thing, but the implementations would be significantly different.) Since I still don't fully understand how this attack is going to work, that's about all I've got.

    156:

    Re: Chinese colonialism

    Yes - a spin on the US formula: send in your engineers to 'help' build/install the infrastructure you designed and manufactured patented parts for, up-sell pricey 'maintenance' package, insist on providing the infrastructure loans, sit back and collect. Will be interesting to see in 5-10 years what bits fail and why.

    Hand-fertilization: this is happening all over the planet, thanks to the great bee die-off. In one study, China posted a loss of 10-11% of its bees in one year - much better than the 40-50% plus in other parts of the world.

    https://en.wikipedia.org/wiki/Colony_collapse_disorder

    Excerpt:

    'A 2015 review examined 170 studies on colony collapse disorder and stressors for bees, including pathogens, agrochemicals, declining biodiversity, climate change and more. The review concluded that "a strong argument can be made that it is the interaction among parasites, pesticides, and diet that lies at the heart of current bee health problems."[59]'

    157:

    I'm always amused by the smart-phones-are-making-us-dumber or the smart-phones-are-making-us-antisocial comments. First off, I'd been generally asocial in public spaces for over four decades before smart phones came along. I'm not necessarily anti-social, but I try not to get into conversations with strangers, unless absolutely necessary — because inevitably the people who want to talk are the ones who won't stop talking once you give them an opening. As for hobbies in public spaces, well, a lot of people (like me) think their hobbies are personal, and they don't want a bunch of strangers looking over their shoulders or making comments.

    I do a lot of my work from coffee shops. Many people have trouble believing that I can possibly be working if I'm not at the office! — and no it's not a convenient time to talk right now. But I may have been one of those people poking at the iPhones you've seen.

    My problem with mobile applications is that they're starting to develop a theory of mind, but they won't be able to recognize my asocialness, and they'll be as bad as the needy talkers that I try to avoid in public spaces.

    https://cdn-images-1.medium.com/max/540/1*U36hBj8i-C7JJJxS4MP2HQ.jpeg

    158:

    Not this canard again.

    First of all, the quoted figure is around 1 million Chinese people in Africa https://qz.com/217597/how-a-million-chinese-migrants-are-building-a-new-empire-in-africa/

    First, it's questionable whether that number is even true: http://africanarguments.org/2016/12/19/we-may-have-been-massively-overestimating-the-number-of-chinese-migrants-in-africa/

    Let's assume for the sake of argument that it is true.

    From the Wikipedia page, most Overseas Chinese are in South Africa. However, Chinese have been in South Africa since the Dutch controlled Taiwan (in the 1600s). Second, it doesn't take into account the migration at the tail-end of the Qing Dynasty. https://en.wikipedia.org/wiki/Overseas_Chinese#Country_statistics

    Second, I'm sure the EU has more Chinese expats than all of Africa, despite having maybe 40 percent of the population. No one is talking about Chinese colonialism in Europe.

    Third, 1 million people is a rounding error in a country of 1.38 billion. I don't think that their absence is affecting Chinese urban planning at all.

    159:

    I think that China wants to replicate the US/Australian massive agriculture policies. Don't forget that both countries have fewer workers per land area than Europe. In other words, their agriculture is more automated. I could see why that would appeal to the Chinese government.

    160:

    "Neonicotinoid" pesticides are being very heavily fingered for this - banned in the EU & even our MinofAg have finally come down on the "ban" side of this one, after much lobbying by the aggro-chemical lobby & much protesting by conservation groups & more importantly, actual experts.

    161:

    Doesn't take many, if those few are in charge of the factories, railways & telephone exchanges/radio stations (etc) ..... And even more so if they are into "improving" & thereby controlling the agriculture. See also: Actual numbers of Brits in Imperial India, compared to total number of people in India ....

    162:

    30 years ago I toured a Trident submarine. I think somebody I was with asked something about GPS, and one of the officers said that they didn't really use it anymore. He said the inertial navigation system (a box as big as a dishwasher) was good enough that it did not drift too much over their, I think, 4 month cruise.

    Each SSBN had two crews which had the boat for 6 months, but the first 2 were training/maintenance/etc. and the last 4 the actual strategic patrol, where they went down and never came up unless something was badly wrong. I think they were really very isolated during those 4 months. Essentially all this money has been spent to let the SSBN disappear, and it isn't going to do anything to make itself visible except in dire circumstances.

    163:

    I'm sure there is an old Lemon Jelly song about that, or was it King Raam?

    164:

    The good news is a Trident sub is pretty roomy, more like an ordinary navy ship than the very cramped stereotype.

    Maybe this changed at some point, or I'm remembering it wrong. They do 112-day alternating shifts with the two crews: 35 near port, 77 out.

    165:

    That's about what I was going to say. Given the specs I've seen on some DARPA sensor RFPs and the error cone estimates for some of the newer space probe missions (which depend on how good an IMU you can launch), I'd guess that a modern nuke sub could have an IMU with a handful of meters total error after a year, given they've got all the power, mass, and volume you could ask for compared to aerospace platforms.

    166:

    What do more informed & expert opinions think of this supposed-or-actual pair of computer hard/software faults?

    Rather than discuss directly how the Meltdown and Spectre faults will affect you and so on, I think it's more fun to step back and think about them in terms of what they're actually about.

    In both cases, basically the issue is about a feature in essentially all modern CPUs called Speculative Execution. In essence, the feature exists because individual instructions take more than one cycle to execute. To maximize performance, the CPU then runs many instructions at a time, where each instruction moves through different execution phases one after another ("pipelining").

    The problem with that is that many instructions depend on the previous instruction. For example, if you have "if X is true then do Y" the CPU will still be thinking about the "if X is true" part when it encounters "do Y." Rather than waiting, modern CPUs generally just make a guess at whether X will be true or not, and then do Y (or not) based on the guess.

    This continues on for a little while until the CPU figures out whether its guess was right. If so, great. If not, the CPU has to undo all the work it did as if those instructions never happened, and then start again down the right path.

    Anyway, both of these faults have to do with problems in the processor's implementation of "never happened." Essentially, the security researchers discovered that these phantom instructions, while they didn't happen in ways that would break normal programs, still had side effects relating to the CPU's cache that could be measured using very careful timing of the instructions that followed.

    So let's step back and enjoy: this security catastrophe is happening because a processor executes phantom instructions based on guesses about the future, and whenever the processor guesses wrong these phantom instructions leave a faint trace that's still momentarily detectable if one is sufficiently motivated.

    Why am I somehow reminded of Hawking Radiation? :-)
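
    (A purely illustrative sketch, in C, of the cache-timing probe described above: the "very careful timing" step, not a working exploit. It assumes an x86 machine and the GCC/Clang intrinsics _mm_clflush and __rdtscp; the probe array, stride, and the stand-in "phantom" access are all hypothetical, and a real attack adds fences, retries, and statistics on top of the speculative access itself.)

        // Illustrative only: assumes x86 and the GCC/Clang intrinsics below.
        #include <stdint.h>
        #include <stdio.h>
        #include <x86intrin.h>   // _mm_clflush, __rdtscp

        #define LINE 4096        // one probe slot per possible byte value
        static uint8_t probe[256 * LINE];

        // Flush every probe line out of the cache before the phantom access runs.
        static void flush_probe(void) {
            for (int i = 0; i < 256; i++)
                _mm_clflush(&probe[i * LINE]);
        }

        // Time a single read: a cached line comes back far faster than an uncached one.
        static uint64_t time_read(volatile uint8_t *p) {
            unsigned aux;
            uint64_t t0 = __rdtscp(&aux);
            (void)*p;
            uint64_t t1 = __rdtscp(&aux);
            return t1 - t0;
        }

        // Whichever line reads fastest is the one the phantom access warmed up,
        // and its index is the leaked byte value.
        static int guess_secret_byte(void) {
            int best = -1;
            uint64_t best_time = UINT64_MAX;
            for (int i = 0; i < 256; i++) {
                uint64_t t = time_read(&probe[i * LINE]);
                if (t < best_time) { best_time = t; best = i; }
            }
            return best;
        }

        int main(void) {
            flush_probe();
            volatile uint8_t *touch = &probe[42 * LINE];
            (void)*touch;        // stand-in for the speculative ("phantom") access
            printf("recovered byte: %d\n", guess_secret_byte());
            return 0;
        }

    The only observable difference between a line the phantom access touched and one it didn't is a few dozen cycles of load latency, which is exactly the "faint trace" described above.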

    167:

    With regard to finding submarines, it should not be a surprise that the US, on putting strategic missiles on SSBNs, began to worry about the findability of such SSBNs. Admirably, it instituted a program to study such issues in a somewhat rational way.

    http://www.public.navy.mil/subfor/underseawarfaremagazine/Issues/Archives/issue_01/ssbn.htm

    http://www.jhuapl.edu/techdigest/views/pdfs/V13_N1_1992/V13_N1_1992_Holmboe.pdf

    One of the early studies that the program did was PROJECT SACKCLOTH, which looked at what the Soviets might come up with for detecting submarines hiding out in the vasty deep.

    168:

    I am reminded of Hex.

    169:

    And I used to* claim that sort of thing was evidence against the universe as a simulation. :)

    170:

    Roko's basilisk ate the footnote, in which I claim that

    171:

    Since we're derailing onto subs a bit, it's worth remembering that, so far as we know, the average depth of the ocean is a shade over 12,000 feet, while the maximum depth for a fighting sub (where it gets crushed) is somewhere under 3,000 feet (probably more like 2,400 feet or less). While subs can and do bounce off the bottom on continental shelves, in the deep ocean, they're in the upper levels of midwater, not on the bottom, where Alvin and her cousins dive. Deep diving submersibles need to be designed very differently than do boomers.

    Maps of the deep ocean aren't very useful if you're far above them, because the only way you can use the map is to go active with your sonar and tell the entire world where you are. That's not so useful for a military sub. This is likely one reason why the deep Indian Ocean wasn't mapped until Flight 370 disappeared. Subs certainly go into that body of water, but the bottom is so far out of their operating depth that having a map of every abyssal canyon and mountain range is useless.

    As for surfacing, my understanding is that subs can get views of what's on the surface from 50 or more feet down. While I understand the desire to avoid aerial surveillance, I suspect there are ways around it.

    Finally, the critical point isn't that subs need to surface to find their location. Rather the critical point is that when BIG ORANGE ONE presses the BIG RED BUTTON on his desk (which, if there were justice, would be a Staples Easy Button), then the codes have to be sent out to all the boomers, wherever they are, that it is their turn to sit on the surface for 30 minutes or so, launching all their missiles, then to dive at max speed and pray they outrun the counterstrike (and I don't think it is assumed that they will, although I could be wrong on that).

    This need to be told when to go kamikaze at any time, 24/7/365, is why they need some way to signal that can reach a sub deep underwater. My bet is on either VLF 2.0 or something like the sonobuoy network operating in a similar mode. I don't think a neutrino radio makes much sense, simply because all the dark matter experiments would notice that there's some weird source of neutrinos on the surface of the planet, and that the signals seem to be modulated somehow.

    172:

    Subs are a guaranteed (if Conventional Wisdom is true) second-strike capability: they don't fire until most of the other legs of the Triad (ground-launched and air-launched strategic nuclear weapons platforms) have been used up or flattened by counterstrikes. That's what makes a decapitation first strike riskier for an aggressor: the knowledge that out there lurks a retaliatory nuclear delivery platform they can't easily knock out pre-emptively. Because of that they're not on instant-launch alert.

    The Trident-class subs don't surface to fire their missiles and they can empty their tubes at a rate of about two a minute. The British subs carry about ten missiles these days, not a full complement of sixteen so they could be done and dusted in about five minutes, long before any surviving enemy forces could locate them and target them unless they've been tracked and followed up till then. That's why British SSBNs get a minder sub, to keep the other side's subs away from them and to help break contact if they do get located.

    Britain doesn't have a triad or even a dyad any more, not even with tactical weapons. It's notable that the Resolution-class Polaris fleet had one named Revenge and one of the current V-class boats is called Vengeance.

    173:

    Ignoring my skepticism about the practicality of neutrino comms, dark matter experiments that notice anomalies have been cautious since the BICEP 2 debacle.

    I would expect survivors to report tentative results somewhere between 6 months and 2 years after the first post-apocalyptic neutrino physics conference.

    174:

    You can simply not have a phone

    I think your argument that people should not use telephones because "people coped just fine without them" is misleading.

    Back when hardly anyone had a telephone things were structured around that: work, social groups, teams, events, etc were set up assuming a different level of communication. The mail came twice a day, classified ads were used far more to communicate, community noticeboards were used far more. People planned events and work around the level of communication that not having telephones implied.

    It's still quite possible to live without a telephone in your house or at work. Mail, classified ads, community noticeboards, couriers, telegrams - these all still exist.

    But most people assume faster and more convenient communication, so you're likely to find yourself and your family missing out on some things if you live without a telephone, or if you simply take the lesser step that you suggest of forbidding any children under the age of 18 from using the telephone.

    175:

    I don’t think the geolocation nightmare scenario is practical unless you assume the phone OS is complicit

    While it’s true the phone knows where you are, the only way for the bad guys to get access to that data in real time is through an app running on the phone

    If you are gay you are hardly going to download the “bash gay people” app and run it so the gay bashing mob can find you

    Facebook may well know where you are but the only thing the gay bashing bad guys can use that to do is target ads at you

    It’s also pretty easy on Android to spoof your location for specific apps; Uber drivers do it all the time

    https://www.quora.com/How-can-you-fake-your-location-on-Uber-app

    If you think this is all happening via facial recognition cameras on the bad guys' phones, that’s pretty hard to imagine

    176:

    Okay, this bears absolutely no actual connection to the main threads, mostly because a fair old chunk of it is stuff I wrote about a week or so back, after listening to Charlie's speech online. So, here goes:

    • One thing which does occur to me is "you've got to be carefully taught" - and maybe one way of reining in the growth of these systems which are doing such frightening things on a human scale is to basically accept this: any learning system has to be treated, in its early stages, as a child in need of supervision, care, and the application of selection and discretion to what they're exposed to. We wouldn't allow very young children to be wandering the cesspits of the internet unsupervised (and anyone who allowed their child to do so would quite rightly be frowned on by their neighbours, and might be subjected to some rather startling interference from child welfare agencies and similar). Maybe we shouldn't allow very young artificial learning systems to be doing the same thing either. Which means instead of creating whole new legal apparatuses to deal with learning systems and so on, what we can do is declare them legal "children" and start insisting they're "taught" in a way which complies with existing child welfare and child protection laws. Which means, yes, removing them from general contact with the internet, and making sure the content they're put in contact with is filtered and screened (and it may also mean getting in people who are specially trained in raising competent human intelligences - teachers, childcare workers and so on - to consult on the matter - AI "programming" or training may well become a [comparatively low-paid] female-heavy profession).

    • We have to remember that no matter what else an AI would be, it would be a machine. Rather like a corporation is a machine for making money - it's just that some of the components of the machine are human beings. As Charlie pointed out, corporations are the AIs we already have, and they're starting to automate themselves at a frightening rate, removing the chances of a poorly fitting (eg overly moralistic) human cog in the machine botching the mechanism. Corporations are, after all, machines - and machines aren't moral, by design. We don't want the toaster agonising over whether or not it should toast the bread, or the microwave worrying about the comfort level of the steak. So if we want moral machines, we have to build them that way to start with. We didn't with corporations.

    • I think it might be interesting to see how the neurotrapping software both succeeds and fails, and where it does so. Because I get the strong impression there might be a few rather interesting loopholes to the whole thing, and it'll effectively act as a screening system to find people who, for example, aren't neurotypical (and therefore have their priorities literally wired in differently) or who aren't heterosexual males (using the porn example as a filter - pornography is still very heavily masculine-oriented, and very much heterosexual-default as well; while women can and do watch it, by default the manufactured stuff is very much tuned toward masculine preferences). It may well wind up acting as the "sociopath detector" Charlie was looking for - a way of detecting those people whose personalities are heavily tuned toward solely positive reinforcement as their only regulator.

    Now for the bits inspired by the comment thread here:

    • Never forget that corporations are basically machines, just a variety of machine where some of the components are human beings who have been taught how to function as components of a machine. (This is what our education systems are fundamentally about: teaching humans how to be cogs in corporate machinery, doing seemingly pointless tasks, or fractions of tasks, for unspecified reasons, on an unpredictable schedule, because someone with authority told us to).

    • Ioan @ 32: "The Nazi apps you mentioned would really be built as template apps (I don't think that outright Nazis have yet to demonstrate enough technical talent to build an app from scratch)."

    A name you may want to google: "Weev". Man seems to be technically ept enough, although at present he's having far too much fun and gaining far too much notoriety from his trolling to be bothered with putting effort into doing things for a living. But he's just one example, and it's probably a bit on the highly optimistic side to think he's the only example, particularly when you consider people like James Damore.

    Even if they're "template apps" (which I take to mean there's a certain element of cut-and-paste coding involved), they can still be effective if they're being used as gamified freeware and triggered once they've reached a certain tipping point in popularity. There only needs to be one or two of them becoming effective in order to create enough chaos to make life harder for a lot of people. Now go have a look on the app store (google or apple, I don't care which) and see how many items there are under the heading of "game". Then imagine what might occur if even 1% of that number turned out to be front-ends for a flash-mob-violence system aimed at whoever it is the app developer particularly dislikes (and let's remember: the Nazis had a pretty long hate list, and it started with "people who don't think like us", moved on to "people who don't worship like us", included "people who don't look right", "people who don't fuck like us", "people who act weird", "people who read the wrong books", and added in "people who aren't as healthy as we think they should be" - their list of people they hated was a lot longer than their list of people they liked).

    Now, given I'm an uppity white woman, who doesn't have children and is past prime child-bearing age, and who is also mentally ill, fat, and suspects she's on the autism spectrum, I have a few things to worry about there. Not to mention I have a past history as a bully target, and I know the kind of damage it causes. So I'm sure you'll forgive me for being a bit more concerned about the possibility of app-driven flash mobs carrying out anti-social acts on the part of neo-Nazis in an effort to basically bring about their thousand year reich. Or even just for shits and giggles because they think it's funny.

    • Marshal Kilgore @ 153: Thanks for the signal boost.

    177:

    If you are gay you are hardly going to download the “bash gay people” app and run it so the gay bashing mob can find you

    But you might download, for example, Grindr, which assholes can then use to find gay people to bash. Or feed the info to some other database.

    178:

    Thanks for that. I'm also reminded of the "collapse of the wavefunction" which is supposed to happen when a quantised event is "observed".

    I wonder if the "panic-stations" hype from some of the press is justified, because it appears that at least some part of these faults is baked into the hardware of the processing chips. And a complete chip-swap for all affected processors could be ... expensive.

    179:

    Or even just for shits and giggles because they think it's funny. You were saying ?

    180:

    Can I just stop and point out how fucking disgusting it is for you, a grown old man, to be positively cackling with glee about the horrible death of a child, mainly because she was using a device you don't approve of at the time of her death? Can you please arrange for me to be notified of your death so that I may also celebrate the passing of someone I have never met and yet despise?

    181:

    And how did you manage to completely broad-jump to that totally wrong conclusion I may ask?

    Where am I "cackling with glee"? And who said I "disapprove of" a smartphone? I have one myself, after all. I pointed out that terminal stupidity is, well ... terminal, as was being discussed at the time. I've been told that the train-driver wasn't exactly a happy bunny after the event, as you would not be suprised to hear, & that viewing the on-board train recording for the investigators was ... err .. distressing.

    182:

    Greg, by reading this:

    RAIB report of terminally stupid teenager who walked into the path of a train with her earphones & jingle-jangle at full volume ...

    And this:

    Darwin awards, here we come!

    I suspect that the picture that paints is not the picture you intended, but it is not a far step to imagine gleeful cackling behind those words.

    183:

    Noted & you may be right.

    There are times when it's very difficult to hit the right note.

    But, we were talking about the difficulties posed by shall we say "not paying attention" - & was it Heteromeles who said something about trying to convince 15-year-olds that they are not immortal?

    184:

    was it Heteromeles who said something about trying to convince 15-year-olds that they are not immortal

    No, it was me.

    Yes, convincing 15-yo's that they're not immortal is a good and necessary thing because it would reduce teenage mortality, assuming it can be done at all.

    I'm not convinced it can be. Supporting evidence: army recruitment in the UK that until recently started at 15, the use of child soldiers in the developing world, every dumb teenage stunt you've ever seen.

    So a good second-best would be teenager-proofing our society. This is more expensive and irritating to the rest of us, but given the sunk costs and enormous lead time (and heart-ache) in starting up a replacement from scratch, I submit that it's worth it. Little stuff counts, like the hole in the cap of every Biro sold since the mid-1960s (prior to which time inhaled pen caps killed hundreds per year in the UK alone) to more obvious things like mandatory driving tests. But counting on our ability to improve attention and cognitive skills? That's a hard task because it involves changing the base parameters of the adolescent human sensory system.

    185:

    army recruitment in the UK that until recently started at 15,

    16 IIRC, and with an assumption of at least a year of training before any kind of deployment to a pointy-end situation. I think the youngest British soldier sent to the Falklands, for example, was just over 17 years old and he was in Logistics, not anything likely to be enemy-facing unless something went seriously tits-up.

    Recruitment of 16-year-olds was very uncommon but it did occur, usually from a pool of young people with school or local cadet force experience (my nephew was in the local non-school cadet force and was scouted when he was 16 but he had decided not to make the army his career by that time). It's not quite "child-soldier" in the sense of bandit gangs kidnapping 12-year-olds and using them as front-line fighters.

    186:

    most people assume faster and more convenient communication, so you're likely to find yourself and your family missing out on some things if you live without a telephone

    A child's school, for example, requires a phone number for emergencies. I suspect this may be a legal requirement, as the school can't authorize medical treatments. Certainly the school needs a way of contacting the parent to come and pick up a sick child — or tell them which hospital their child was admitted to.

    187:

    The current activity at MS / Apple / Linux kernel developers is aimed at implementing a mitigation for Meltdown. Without that, an unprivileged program could easily read any byte in the computer's RAM. That has special impact for cloud servers. Spectre (and Meltdown after mitigation) require special setups to compromise a computer, but in principle no information in a process is 100% secure from other processes on the same computer. This can only be changed by a redesign of current processor hardware with Spectre in mind. It opens an immense attack surface which will keep security experts busy for years and maybe decades. I'd compare its impact to the attack surface opened by buffer overflows in general.

    188:

    I'm running a few OSes behind on a Mac — haven't upgraded because the software I use 50% of the time isn't supported on newer OSes. Would it be a reasonable assumption that as long as I don't install new software I should be OK?

    I'd love to have the bugs patched, but I suspect updating Yosemite isn't in the works.

    189:

    As long as you don't let your browser run JavaScript, as the attack has been demonstrated using it.

    Oh wait: "Comments (This form requires JavaScript. You may use HTML entities and formatting tags in comments.)"

    190:

    So turn off JavaScript except when I'm reading Charlie's blog? :-)

    I can do that. Thanks.

    191:

    any learning system has to be treated, in its early stages, as a child in need of supervision, care, and the application of selection and discretion to what they're exposed to

    Interesting idea. David Brin used something like that in some of his stories — AIs raised as humans.

    192:

    Would it be a reasonable assumption that as long as I don't install new software I should be OK?

    No, it most certainly wouldn't be.

    Unless the O/S prevents it the bug will happen and no current or past O/S does this. What you need to do if this worries you is upgrade to the latest patch when it becomes available.

    And this has nothing to do with Java, Javascript, or any other programming language, nor any program your computer runs. The problem is down in the firmware of the actual chip. It can be worked around by software at the O/S level at the expense of some slowdown, depending on what you use your computer for.

    If your hardware is really really old (like pre-hyperthreading) you probably won't have to worry.

    193:

    Here is a good summary of the technical issues and what is being done about it. Basically Apple, Google, Linux and Microsoft all had software fixes for Meltdown ready before the news broke having been informed several months ago. In Apple's case the update to macOS was pushed out in early December. I've been running it for nearly a month and contrary to the scare stories didn't notice any slowdown issues.

    According to Apple, testing with public benchmarks has shown that the changes in the December 2017 updates resulted in no measurable reduction in the performance of macOS and iOS as measured by the GeekBench 4 benchmark, or in common Web browsing benchmarks such as Speedometer, JetStream, and ARES-6.

    Spectre mitigation is ongoing and requires soon-to-be-released browser updates which nobble the accuracy of the Javascript timers needed to exploit Spectre. The Meltdown security issue is already fixed in these updates for Apple, Microsoft, Google and Linux, and future updates are expected to reclaim more of any lost performance.

    These flaws aren't the kind that allow anyone to break into a computer; exploiting them requires malware already running on the target machine that got there by other means (including Javascript running in browsers, which is what the browser patches are for).

    This looks worst for Android as most Android devices will never be patched and malware is widely distributed on Android.

    195:

    Re: 'So let's step back and enjoy: this security catastrophe is happening because a processor executes phantom instructions based on guesses about the future, and whenever the processor guesses wrong these phantom instructions leave a faint trace that's still momentarily detectable if one is sufficiently motivated.'

    This is stunning. My non-techie take-away from the above is: this type of capability is what would allow for an AI to self-teach/become self-aware. Also, that a calculation/idea has a life of its own.

    Ommmm ....

    Feel free to correct/educate ...

    196:

    Getting back to some of the original ideas, as in: how do we keep AIs from killing us all.

    Let's look at some basic biology:

  • I did my PhD on mutualistic relationships. There wasn't (probably still isn't) a lot of good theoretical work on mutualisms in English (and I'm monolingual). Part of the problem is societal: the study of mutualisms grew out of the mutual aid societies started by communists and anarchists over a century ago. Back then, everybody looked to the natural world for inspiration about how to run human affairs, so communists tended to trumpet mutualisms like lichens as an example of workers working together, while capitalists extolled social Darwinism. The science got politicized, and bluntly, it still is. People working on symbioses and mutualisms tend to be kooks like me, or else people like me who notice that such relationships are ubiquitous and start wondering how they work, only to get sidelined by the competition-brained game theorists.
  • Robert Axelrod's Evolution of Cooperation is still a foundational work, and it deals basically with the iterated Prisoner's Dilemma game and the Tit-for-Tat strategy, which works quite well. The late Elinor Ostrom won a Nobel Prize in economics (the only woman to do so, although she was derided as a mere sociologist and certainly not a theorist by a lot of male economists) for her work on the factors that allow commons to form and endure, because she noticed that, despite bloody-minded games theories, people all over the world have formed commons-type management systems to manage certain classes of resources (notably water), that some of these commons have endured for centuries (unlike most corporations), and that the ones that endure share the same eight traits. Of course, I'm sure the usual suspects here will automatically blow off her work, just as most others do. Still, if you're in LA and drinking water, you're benefiting from a commons system, and it's one she studied. But don't let that keep you from thinking that the word commons has to be preceded by "Tragedy of the."

    Anyway, one of the things that mutualistic relationships, tit for tat, and commons all share is that there are effective means for dealing with cheaters. With bacteria or the mycorrhizal relationships I studied, cheating was punished by death, basically. If the relationship was about exchange of nutrients, one side trying to take nutrients without giving something back was either killed, or the structure that allowed the nutrient exchange was destroyed. We do the same thing with our gut bacteria. If they show up outside our gut, our immune system attacks them before they can kill us. One of the key features of commons that work is that infractions are punished quickly, fairly, and visibly.
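
    (A toy sketch in C of that "cheaters get punished" mechanism, using Axelrod's iterated Prisoner's Dilemma: Tit-for-Tat simply copies its partner's previous move, so a persistent defector gets punished on every round after the first and ends up far behind what two cooperators would earn. The payoff numbers are the standard illustrative ones, not anything from the comment above.)

        #include <stdio.h>

        #define COOPERATE 0
        #define DEFECT    1

        // Payoff to "me" for one round, using the standard illustrative values
        // (temptation 5, reward 3, punishment 1, sucker 0).
        static int payoff(int me, int other) {
            if (me == COOPERATE && other == COOPERATE) return 3;
            if (me == COOPERATE && other == DEFECT)    return 0;
            if (me == DEFECT    && other == COOPERATE) return 5;
            return 1;   // mutual defection
        }

        int main(void) {
            int rounds = 20;
            int partner_last_move = COOPERATE;   // Tit-for-Tat opens by cooperating
            int tft_score = 0, cheat_score = 0;

            for (int r = 0; r < rounds; r++) {
                int tft_move   = partner_last_move;  // copy whatever the partner did last
                int cheat_move = DEFECT;             // the cheater always defects

                tft_score   += payoff(tft_move, cheat_move);
                cheat_score += payoff(cheat_move, tft_move);
                partner_last_move = cheat_move;      // remember it for next round
            }

            // The cheater wins the first round, then gets punished every round after:
            // it ends up barely ahead of Tit-for-Tat and far behind what two
            // cooperators would each have earned.
            printf("Tit-for-Tat: %d, Always-Defect: %d, mutual cooperation: %d each\n",
                   tft_score, cheat_score, 3 * rounds);
            return 0;
        }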

    One way to remember this is that so-called Mexican standoffs can (paradoxically) ensure fair cooperation.

    When we work with AI systems, whether corporations or computers, I'd suggest that we need to set that up. One huge problem we have with the internet, Facebook, governments, or the big banks is that we can't effectively punish cheating, and certainly not in a quick, fair, or visible way. A mutually beneficial relationship with AIs pretty much requires that we can punish them as easily as they can punish us, and that we can destroy them as easily as they can destroy us.

    But that's not all. One essential component of human relationships is gift-giving. It far predates monetized economic relationships, and it's still essential to bringing up children (not in the idea of giving children gifts, but in the very real idea that you don't directly get back all the resources you lavish on a child. It's a gift, and if you're lucky, that child goes on to give that gift to her children too). Gift economics is a poorly developed field, and so far as I can tell, unlike game theory, there's far less theoretical work on gift theory. Most of it is focused on gift economies and on the anthropology thereof, with some interesting outgrowths, from burning man to parts of the old internet.

    When we talk about AIs, we never talk about gifts. They're existential threats, and we need to destroy or enslave them. Brin's idea of treating them like children is, to put it bluntly, stupid. When you treat something like a child, you're either patronizing it (look at the linguistic root of patronizing), or you're turning it into a social parasite that will out-child your human children and take their place (which is what we see happening with dogs and cats now, with fur babies and owners being referred to as parents). Instead, we need to set up mutualistic relationships with them. On one hand, that means making sure that mutually assured punishment and/or commons work for all parties. On the other, I'd suggest that we need to start figuring out a good mathematical theory of gifts, and use that, rather than a theory of (war)games, to see if there are ways that we can relate to other intelligences that don't end up in war, enslavement, or mutual destruction.

    197:

    Re: ‘ … a way of detecting those people whose personalities are heavily tuned toward solely positive reinforcement as their only regulator.’

    This bit is tricky – addiction can be created via a variety of drugs as well as trauma/illness and brain damage (due to aging). It’s possible to become addicted to anything. On the other extreme – too little stickiness/perseverance of behavior has also been shown to be a reliable predictor of future antisocial/personality disorders (New Zealand longitudinal study) as well as poor academic/career performance.

    At present, I think that we need to recognize that such cognitive/personality trait ranges and extreme levels exist and that they are likely to exist for as long as humans remain HSS. However, like vision, hearing, physical build/fitness, we also need to understand and then figure out how to support or regulate individuals who need help with their various interpersonal/social senses. Basically, convince society that just like kids with poor eyesight, kids with poor interpersonal skills can with the right support/education turn out okay and be reliable citizens/employees.

    BTW - one of the top new TV shows in the US this season: The Good Doctor - autism/savant syndrome new MD, British actor up for Golden Globe Award. This stuff matters: being regularly exposed via mass/popular media to different people makes it easier/more comfortable to get along with different people. (Easiest way to popularity/acceptance is increased familiarity: this connection has been tested/retested up the wazoo. It works.)

    https://en.wikipedia.org/wiki/The_Good_Doctor_(TV_series)

    198:

    Re: AI 'gifts'

    Begs the questions: 1- What motivates an AI? 2- To what extent do you want to motivate an AI?

    We need both reward and punishment which combined and calibrated becomes the feedback mechanism that would keep the AI system on an even keel. Above all, hard-wire Asimov's First Law: do no harm.

    Gifts - Have wondered whether gifts came before money/payment. Scenario: Help/food was provided freely by Early Human B to Early Human A who then thanked/returned the favor/gift. This went on until it became accepted and expected practice. Then one day some brawny antisocial/amoral Early Human decides to demand his 'gift' in advance of providing the favor. The only difference in this transaction is the timing of the gift, so it is added to 'normal' transactions.

    Related: Look up central banks and interest rates for a more recent example of this type of 'timing' inversion. Originally the central banks' interest rates were a summary/average of what the commercial interest rates were for that past quarter. About 5 years ago, one of the central banks announced that, since 'the street' had for a few decades been using the central bank interest rate as the basis for the upcoming interest rates they could charge customers, going forward the central bank interest rate would officially be the future (and not the past) indicator. (Classical conditioning - it works!)

    199:

    The US on the other hand ... That is a short description of one facet of a state headed directly towards a corrupt-corporate state - which is one of the definitions of fascism, isn't it?

    Corrupt isn't a necessary part of the definition. It's just always present, as it is around any centralization of power. Fascism is not inherently worse than other forms of autocracy, and some, historically, have been relatively benign. Of course, that changed with each autocrat, and different people had different definitions of benign. But fascism is just the same. Which shouldn't be a surprise, as it's being run by a proto-aristocracy. (Give it a few generations.)

    Now to relate this back to the talk... AIs may have a longer life-cycle than corporations, or they may be instantiated as corporations. See Accelerando for one example. OTOH... I think there's a fair chance that at least one government will become an AI. This will be optimizing something different, but just what is hard to predict. And there may be several that are optimizing different things.

    For that matter, an effective "governmental" AI would throw off the population requirements estimated (here, in the past) for a stable space colony. It could be down to a few thousand...or even lower. And what that would be like would depend entirely on what was being optimized.

    200:

    With respect, that's FUD. I suggest you read the post @192.

    You are conflating the existence of a flaw with the exploitation of that flaw, and no, it’s nothing to do with firmware.

    The reason the tech companies are acting on this so quickly is that it opens up a whole new Class of exploits that have only been conceptual before so it’s likely to kick off a new front in the arms race between Attackers and Defenders.

    The nature of the flaw is also exacerbated by the widespread use of VMs, Cloud Computing and Containerisation in modern computer infrastructure, meaning that one compromised VM compromises at least the entire physical host.

    TL;DR: as a vanilla consumer you should patch, but if you choose not to and observe “good” computer hygiene (no odd downloads or installs, no dodgy sites) the chances of this hitting you personally at this time are low. Now if some enterprising scrote comes up with the technical equivalent of a “first strike” using this, all bets are off. Even then I believe it would need to be a double or triple threat involving another flaw as well (site / App Store poisoning, root escalation etc etc).

    For me the threat pyramid factoring both likelihood and consequences looks like this (biggest to smallest)

    • Servers (now is not the time to be working in a Data Centre)
    • Android Devices
    • Windows Clients (Hi NHS!)
    • Internet of Shite (IoT)
    • MacOS
    • iOS (except Watch)
    • *nix Clients (comprising either NeckBeards who compile the OS and Apps from source every boot, or those who already bear the mark of Cain (systemD joke) and hence are hopelessly compromised already)

    Honestly the sky isn't quite falling in yet but at least take the time to understand the issue and make an informed choice as to whether you patch or not.

    201:

    Also - as in - what's the strangest wildlife reserve on the planet? Korea's DMZ.

    Why not the Chernobyl exclusion zone?

    202:

    It's an attack that has been known about for 6 months, ever since Google informed the major players. Intel chips are a lot more vulnerable than anyone else's because of a specific architectural choice. Fixing the basic problem is going to require an entire redesign of all chips that practice speculative execution (almost all modern chips). It's not just a problem for cloud servers. Expect the mitigation patch to slow down every machine by at least 5%. And this is a mitigation patch, not a fix.

    The attack essentially allows anything on your computer to be read. This includes things like passwords and bank account access codes. Etc. So the Intel PR announcement that "It doesn't allow your computer to be corrupted", while sort of true, is really a blatant lie. Once they read your passwords and your bank account access codes, they can do what they want.

    Also, you won't know about the attack when it happens.

    Intel appears to be in a swivet. I have no idea how easy it is to perpetrate.

    OTOH, NPR announced that Apple announced that all Apple devices are susceptible to the Meltdown variant, which is the currently most dangerous one. Microsoft has released an "out of band" update which it is forcing on everyone. Linux has released a modified kernel update. I haven't heard about the BSDs.

    P.S.: The Meltdown attack can be done from an unprivileged JavaScript script. IOW, any web page.

    203:

    As OGH said, it was raised to 16 only within the last few decades.

    204:

    It's less your OS than your chip. If you have a CPU chip that doesn't engage in speculative execution, then you are safe.

    OTOH, Apple has announced that all models are vulnerable. So you need to assume that you are vulnerable. Check out whether your CPU is on the list of affected chips. I don't know which version Yosemite is, but if you bought it in the last 10 years it probably is. If it's older...maybe not.

    FWIW, essentially all OSs are vulnerable, as this is a microcode level attack. But some hardware isn't as vulnerable as others, and some is immune.

    Basically it's a hardware design problem, with some designs (generally Intel) more seriously affected than others. A real fix is going to require a new generation of hardware.

    205:

    While the attack was demonstrated using JavaScript, that was basically a proof that even a really lousy language with lots of inherent timing problems could use it. And, of course, showing how easily it could be spread. But there's no reason to think that other languages couldn't do the same thing. They might need to be running in a virtual machine, as all cloud systems do, but AFAIK that's not proven. Separate processes might well be enough.

    206:

    "The attack essentially allows anything on your computer to be read."

    Unless I have completely misunderstood it, it allows only data for which you have a page table entry but no read access to be read. Well, effectively - its restrictions are less simple, but that is the gist. If so, it is unclosable for Java and Javascript sandboxes - but that's not a major new class of attack, as the windowing systems have similar flaws.

    207:

    I don't quite agree. Any unpatched system will be a sitting duck for Meltdown attacks. Getting an unprivileged malicious program running on a current system isn't a big hurdle.

    208:
    Unless I have completely misunderstood it, it allows only data for which you have a page table entry but no read access to be read.

    There are two faults, Meltdown and Spectre.

    Meltdown is Intel-specific, and allows for reading from pages with no read access. Since modern 64-bit OSes map all physical memory into the kernel for performance reasons (they can use large page optimizations), and share the kernel map with the user map, this is all memory, at least until the isolation fix goes in.

    Once the kernel page table is isolated, the obvious method to close this for sandboxes is to use simple process isolation.

    The second fault, Spectre, is much trickier. Essentially this is a way to use the same speculative execution side channel to extract information from another process. You poison the branch prediction unit so it will follow paths you specify, and then make an IPC call of some sort. When the victim processes the IPC call (possibly with invalid arguments), it speculatively executes code in the manner you select, which causes it to leak information.

    It looks like we're going to get new compiler modes which intentionally break the CPU's speculative execution engine for certain particularly bad patterns here, but that will require recompiling all software and only addresses a particular class of exploit.

    Fun times.
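
    (For the curious, a minimal sketch in C of the victim-side code pattern the public Spectre variant-1 write-ups describe: a bounds check the attacker trains the branch predictor to mispredict. The array names, sizes, and the 4096-byte stride are illustrative; this shows the shape of the gadget, not a working exploit.)

        #include <stddef.h>
        #include <stdint.h>

        size_t  array1_size = 16;
        uint8_t array1[16];             // in-bounds data; the secret lies beyond it
        uint8_t array2[256 * 4096];     // probe array whose cache state leaks the byte
        volatile uint8_t sink;          // keeps the compiler from removing the access

        void victim(size_t x) {
            // The attacker first calls this repeatedly with in-bounds x, training the
            // branch predictor to expect the bounds check to pass. Then it supplies an
            // out-of-bounds x: the check eventually fails and the result is discarded,
            // but speculatively the CPU has already read array1[x] (a byte the caller
            // shouldn't see) and used it to touch one cache line of array2 -- the side
            // channel described above, recoverable by cache timing.
            if (x < array1_size)
                sink = array2[array1[x] * 4096];
        }

        int main(void) {
            for (int i = 0; i < 100; i++)
                victim((size_t)(i % 16));   // training calls, all in bounds
            victim(100000);                 // out of bounds: only executed speculatively
            return 0;
        }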

    209:

    No, this was a proof that the attack could be made from a web page by running JavaScript in the browser of the target without needing to actually install the malware on the target machine. It only works if the JavaScript engine has access to high resolution timers (5 microseconds). The attack is mitigated by reducing the resolution of the available timers in the JavaScript engine and other adjustments. With the browser vector blocked it becomes necessary for the attacker to get malware installed and running on the target machine to exploit this.

    210:

    I don’t think the geolocation nightmare scenario is practical unless you assume the phone OS is complicit

    I'd be interested in knowing what basis you have for assuming the phone OS is not complicit?

    It's kind of old news ...

    http://www.telegraph.co.uk/technology/advice/11056373/How-your-iPhone-is-tracking-your-every-move.html

    211:

    Supporting evidence: army recruitment in the UK that until recently started at 15

    Nope... not unless you define "recently" as "over forty years" (note that the Met Police have their own Cadet Force; check out the starting age)

    The British Army might take recruits on at 16, but even back in the 1960s it was only into what were then called "Junior Leaders", and is now "Army Foundation College Harrogate" - in other words, a Sixth Form College with education as a big part of the syllabus. You can join from 17.5 - but because the training takes six months, you can't really join a unit until you're 18.

    There was an outcry when a 17-year-old and an 18-year-old were among three soldiers from the Royal Highland Fusiliers who were lured to what they believed was a party in Belfast, and murdered by PIRA; and from 1971 there was a hard lower limit of 18 for deployment to Northern Ireland.

    The Falklands was AIUI the last occasion when the Army allowed under-18s to deploy on operations - even then, they were limited to that "over 17 and a half" limit. Ian Scrivens and Jason Burt, who were killed on Mount Longdon, were both 17-year-old paratroopers; Neil Grose had turned 18 the day before he was killed. The Royal Navy even tried to dissuade some young sailors from sailing; Stephen Ford was only just 18 when he died on HMS Ardent.

    212:

    There is a pretty significant difference between the phone hardware tracking your every move (which it certainly does), the way the phone OS allows software running on that phone (an app) to access location data, and finally the way that apps with access to location data (which you have to specifically grant) share that data.

    In Charlie’s Gay Bashing flash mob example you would have to believe that either:
    • the OS was backdoor-sharing location data with the gay bashing app for some reason, or had been exploited
    • some other app (like Facebook) that you grant location sharing permissions to was doing the same, or had been exploited
    • you specifically installed the gay bashing app and allowed it to access your location data

    Only the first case is scary

    For the second case people would just disable location sharing with the offending app. It’s a pretty easy toggle to find in the OS UI. Or on Android they could spoof it

    In the Tindr example, while Tindr might know you are gay and in a location, no one else but Tindr can get at that data. Tindr is in no way incentivised to share it

    It’s kinda the same idea as putting your credit card into Apple Wallet. It doesn’t mean anyone who writes an app can steal your money

    213:

    So turn off JavaScript except when I'm reading Charlie's blog? :-)

    I can do that. Thanks.

    You don't need JavaScript to read this blog; only to sign in & reply to comments.

    You might look into something like NoScript which allows you to control how JavaScript is used.

    214:

    my "15" was a typo for 16. (Cadet forces began at 15 but was in no way actual real Army.)

    215:

    I think you're missing my point—that a whole bunch of cloud services we trust with intimate data aggregate information that can be sensitive, and then provide curated views of it to other users. Tindr as an example knows about your location and sexuality. A hypothetical gay-basher app would presumably use human sock puppets to register a bunch of fake Tindr accounts and then use them to identify nearby targets.

    If you think this is far-fetched, bear in mind that by some estimates up to half of all twitter accounts belong to bots.

    216:

    Why not the Chernobyl exclusion zone?

    You can take tours of the Chernobyl Exclusion Zone.

    217:

    Well, I remember it being raised from 15 to 16 - and, no, I don't mean cadets! But 40-odd years ago would seem about right for that.

    218:

    I don't know which version Yosemite is,

    Yosemite is OS X 10.10 ... followed by El Capitan (OS X 10.11), Sierra (macOS 10.12) and the current High Sierra (macOS 10.13, currently 10.13.2).

    One problem is that some Apple computers with older Intel hardware can't run newer versions of OS X beyond Yosemite and Apple is not releasing security patches for older versions of the OS.

    Microsoft, OTOH, appears to be pushing out security updates for Windoze 7, 8 and 8.1 in addition to Windoze 10.

    219:

    Charlie, Tindr and other dating apps never share your exact location for precisely the reasons you outlined

    They give you a rough approximation of how far away the person is (like Bob is within a half mile of you)

    The expected flow is that you then negotiate a meeting place where you feel comfortable / safe meeting a stranger

    To follow your AI analogy, the Tindr AI really doesn’t want its users mugged by gay bashers, as this is directly in contradiction with its paper clip maximizing. If some other AI figures out how to exploit it, the Tindr AI would actively resist / adapt

    If the Tindr AI lost that battle people would stop using it and it would die

    Also if all you are interested in is identifying and bashing gay people there are a ton of easier ways to do that.

    I get the problem you are seeing, but I think the bigger concern is how the AIs are exploiting the data they are receiving in perfectly legitimate TOS-compliant ways. IMO the very end of your speech detracts from that message rather than amplifying it

    220:

    Lest I come across as overly critical: the speech in general is spot on and the slow AI analogy is really powerful, and the best way yet I’ve heard to explain the problem to laymen

    221:

    Elderly Cynic wrote: but that's not a major new class of attack, as the windowing systems have similar flaws.

    From a high enough level, yes, it's just being able to read memory you shouldn't.

    It is being called a new class of attack because, like say the timing attacks on smartcards, it is not a bug or faulty software, just someone doing something that hadn't been thought of as a way to attack a system. Intel wrote in their press release (see theregister.co.uk for their usual entertaining takedown) that their chips were "operating as designed" and they are quite right. They were designed to be very fast and they are, and they were designed to be secure against known attacks, and they are - until now. Meltdown and Spectre are "unknown unknowns" types of attack.

    The fault is in the chips, not the operating systems. The OS patches that are being rushed out can work around the problem only by "turning off" certain features and accepting a performance hit, estimated from 5% to 30% depending on workload.

    If you're, like me, a consumer with a desktop or laptop, you probably don't have to worry about it. (Unless you are an MI5 agent, high ranking bank executive etc, in which case you already have to worry about this kind of thing.) The biggest problems are for datacentres and cloud virtual machine hosting, who suddenly have to patch and reboot every single computer. And their customers are going to find that even a 5% slowdown matters a lot when multiplied by a gazillion machines.

    222:

    On AIs, mutualistic relationships, and AIs not killing us all:

    Most of the AI discussion starts with AIs as separate entities, like other people. The impression I get from the original talk is that the threat is more from human-AI symbionts. Right now we have symbiont corporations made up of humans and laws and company policies; maybe the future threat is humans with AI assistants or AIs assisted by humans, not AIs as solo beings.

    Instead of focusing on what it takes to persuade AIs not to kill us all, which is apocalyptic and IMHO not too likely for a while, maybe we should be concentrating on the problem of stopping other humans amassing too much power with AI assistance?

    223:

    An actual worked example (as in I worked there) of a strange nature preservation area was, and still is, AWRE Aldermaston (now AWE Aldermaston). A chemist I shared a bench with there was a keen amateur botanist and had found four rare species of orchids growing on the site's rough ground around the explosives magazines. One of those orchids was believed extinct in Britain at the time he found it. And he couldn't tell anyone where he had found it or take pictures of it or anything...

    224:

    To clarify: yes, I know everybody has their own definition of symbiosis. In the more recent ones, symbionts DO NOT require tissue-to-tissue contact. Pollinators and angiosperms have a perfectly good symbiosis going right now.

    In any case, humans work with beehives, which are superorganisms made out of bees. You can have a perfectly good (arguably symbiotic) relationship with a corporation, such as an ISP, your bank, your utilities, your food suppliers... The problem is, when some of them cheat you, you can't easily punish the corporation or have the corporation punished. Still, the point is that you can have cross-level interactions with a corporation, just as you can with either a beehive or a single bee.

    This, indeed, was the origin of the idea that corporations are people. It started with the legal notion that people can create contracts with each other. It's useful to be able to transact business with a company, not with an individual within the company, and so for the purposes of things like contracts and lawsuits, corporations were deemed to be legal people. Without this theory, you would have to have a contract with a human within the company. When that human moved on, you would have to renegotiate your contract with another human within the company. This is unworkably cumbersome for something like a utility, so being able to contract with the corporation itself works well.

    The problem with US law started with, IIRC, a note from a law clerk on an unrelated lawsuit, and a dodgy reading of that note as a legal ruling by subsequent courts, including the US Supreme Court. I think there's reasonable grounds for clarifying that the theory that corporations are people is a legal construct designed to facilitate certain types of interactions between corporations and other entities, but that it does not endow them with any human rights as given in the Constitution. I suspect that's going to take a financial depression and some epic anti-corporate ass-kicking to get us to where that could be law. We'll see how it works out.

    225:

    For your weekend amusement, if you're worried that machines will take over: a while ago I blogged about how the US Army had apparently racked up a $6.5 trillion deficit in its accounting journal entries in 2015.

    Since then I found an entry on what happened in War is Boring.

    It's relevant here, because it shows how the US Army can defend us against AI takeover. Apparently their internal accounting practices are so screwed up that when they created a program to try to help them sort out the mess and fed their records into it, it generated the $6.5 trillion unexplained deficit, and around 90% of the journal entries supporting that (apparently absurd) deficit were blank, except for a dollar figure.

    Turns out that Army accounting is so screwed up, it makes AIs gibber. That's how we'll beat the computers in the end--with crazy bad record-keeping. Go Army! I hope they relabel their accounting ecosystem as a new cyber defense network and offer to market it internationally to competitors...

    226:

    On geolocation accuracy:

    Recently, a set of third party Ingress tools became widely known. There's an ongoing discussion about the motivation behind creating these tools; let's leave that question alone. Their purpose was to allow Ingress players to track the movement and actions of their opposition. They operated by pulling down semi-public messages from the game's chat system.

    One of the things the tools can do is figure out where a player chat message originates. I.e., if BobHoward types "hey, anyone up for playing in Leeds?" the tools were able to attach an origin location to the message. This is somewhat perplexing, since the packets of data that carry that message to the game client(s) do not contain an origin location.

    Turns out that a clever developer took advantage of the feature that allows you to control the radius from your current location from which you see messages. I.e., if I'm a player in New York, I probably don't want to see people chatting in Leeds -- so I set my radius to 5 km and I only see messages from roughly a 5 km circle around my current position. (Yes, I am oversimplifying a bit.)

    If you have a lot of robots watching the Ingress chat stream, you can triangulate approximate location by keeping track of which ones can see a given chat message and which ones can't.

    An app's datastream must be useful to the client device. If Tinder knows where you are, that data can be extracted. If Tinder provides side channel data that permits triangulation... well, you know the rest.
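
    (A toy sketch in C of the triangulation idea described above, with entirely hypothetical coordinates: each listening bot knows its own position and visibility radius and reports whether it saw a given message; any point consistent with every report is a candidate for the sender's location, and the centroid of those candidates is a workable estimate.)

        // Build with: cc triangulate.c -lm
        #include <math.h>
        #include <stdio.h>

        typedef struct { double x, y, radius; int saw_message; } Listener;

        int main(void) {
            // Hypothetical listening bots on a city grid (units: km). saw_message
            // records whether each bot's radius-filtered chat stream included the
            // target message.
            Listener bots[] = {
                {0.0, 0.0, 5.0, 1},
                {6.0, 0.0, 5.0, 1},
                {0.0, 6.0, 5.0, 0},
                {8.0, 8.0, 5.0, 0},
            };
            int n = sizeof bots / sizeof bots[0];

            // Brute-force grid scan: keep every point consistent with all reports
            // (inside a bot's radius if and only if that bot saw the message).
            double sum_x = 0, sum_y = 0;
            long hits = 0;
            for (int xi = -100; xi <= 200; xi++) {
                for (int yi = -100; yi <= 200; yi++) {
                    double x = xi * 0.1, y = yi * 0.1;
                    int consistent = 1;
                    for (int i = 0; i < n; i++) {
                        int inside = hypot(x - bots[i].x, y - bots[i].y) <= bots[i].radius;
                        if (inside != bots[i].saw_message) { consistent = 0; break; }
                    }
                    if (consistent) { sum_x += x; sum_y += y; hits++; }
                }
            }
            if (hits)
                printf("sender is somewhere near (%.1f, %.1f)\n", sum_x / hits, sum_y / hits);
            else
                printf("reports are inconsistent\n");
            return 0;
        }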

    227:

    "Basically, convince society that just like kids with poor eyesight, kids with poor interpersonal skills can with the right support/education turn out okay and be reliable citizens/employees."

    Please, not that last word. The notion that someone's value as a person is necessarily a function of their ability to operate as a useless-in-practical-terms subunit of a planet-destroying AI is all too pervasive and leads to discrimination and fascism, and cripples the ability of society to resist the encroachments of said AIs. (Which is of course the reason it is so pervasively propagated in the first place.)

    Someone further up remarked on the "seemingly pointless" tasks performed by people operating as such subunits - again, the word "seemingly" ought to be deleted; they are pointless tasks from a human viewpoint, and that the AI itself does not consider them so doesn't change that.

    228:

    Sure you can triangulate if you have a dense enough set of listening posts and access to the data/app in real time. Which is harder than it sounds because people are always in motion and the more listening posts the more they end up detecting each other. But yes, could be done

    However it’s also not too hard to frustrate such efforts by introducing noise into the data you return
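
    (A minimal sketch in C of that countermeasure, with illustrative parameters: blur the distance you report to clients with bounded random jitter, so aggregated reports no longer triangulate cleanly.)

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        // Report the true distance blurred by up to +/- jitter_km, clamped at zero.
        static double fuzz_distance(double true_km, double jitter_km) {
            double u = (double)rand() / RAND_MAX;         // uniform in [0, 1]
            double offset = (2.0 * u - 1.0) * jitter_km;  // uniform in [-jitter, +jitter]
            double reported = true_km + offset;
            return reported < 0.0 ? 0.0 : reported;
        }

        int main(void) {
            srand((unsigned)time(NULL));
            double true_km = 0.8;   // the user is really 0.8 km away
            for (int i = 0; i < 5; i++)
                printf("reported distance: %.2f km\n", fuzz_distance(true_km, 0.5));
            return 0;
        }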

    Like most things security, it’s an arms race

    However the outcome of an app losing the arms race is not a world controlled by all-powerful flash mobs; it’s an app that goes out of business and no one uses anymore

    229:

    Another is British Army training areas. Some stories...

    • The OBUA village in the Thetford training area; as we were investigating the facility for the next week’s training, the Sergeant-Major who ran it was just heading off to brief a visiting party of undergraduate biologists from Cambridge; the lack of agrochemicals and farming meant spectacular biodiversity, almost unique in the region.

    • Sitting in a coordination meeting to find that several square kilometres of Salisbury Plain were marked “out of bounds to training” because the Staff Officer responsible for it had finally had some breeding pairs of a rare bird return that spring, they were nesting, and he didn’t want us disturbing them...

    • The Greens in Germany, who in the 1980s thought that they’d be able to rail against the evil imperialist warmongers and their destruction of the environment with tanks and guns. Only to discover that places like Soltau and Sennelager were (again) massively biodiverse, and that the British warmongers were doing a far better job of maintaining the local wildlife and wilderness than other nearby German-managed rural areas...

    230:

    It was you, replying to me, in the context of educating kids from an early age to reject peer pressure. I think you missed the point, because your reply changed "education" to "legislation" and decried the possibility of "legislating away" teenage illusions of immortality, which I would quite agree would be a silly idea.

    It's kind of an odd reversal of the positions of a previous exchange, where I disparaged the expectations of those who attempt to legislate away arseholish sexual behaviour of human males and are then surprised that it still happens, and you responded that it actually was reasonable to expect humans to stop behaving like monkeys because behaviour can be altered through education.

    So we're basically both expressing the same point from opposite directions: legislation against instincts is a silly idea, but education against instincts is a hopeful possibility.

    Maybe I overestimate the hopefulness because my response to "you should do X because everyone does it" has been "everyone is not me, and if they want to do this silly thing that's their problem, don't expect me to make it mine" for as long as I can remember, and similarly I have considered "macho" to be a synonym for "dickhead" ever since I got the first vague idea of what it meant. But I don't think I am overestimating; I just have a personal awareness that those are attitudes which it is entirely possible to hold, and a general awareness that - on the evidence of societies around the world past and present - there doesn't seem to be any attitude, however bizarre, that you can't get people to accept as normal if they grow up with it.

    The difficulty is not in the feasibility of the education, but in the desire of those in a position to influence its provision to make sure it doesn't happen because it would cripple their ability to exploit others.

    231:

    I would describe the incident in pretty much exactly the same terms as Greg, and there certainly isn't gleeful cackling behind it, more a case of hold head in hands and gurn despairingly. I have read the accident report and "Darwin award" is a just summary.

    I've also read the reports on various other level crossing accidents, and they are almost without exception all the bloody same: the collision is the fault of the road user for doing something unbelievably stupid - usually of their own accord, occasionally with the assistance of an even more unbelievably stupid police escort (Hixon). I've even seen such stupidity demonstrated, on a video taken out of the front of a train (the only kind of videos I ever watch) where by chance a car shot across a level crossing just in front of the approaching train, at a speed which made it clear that the car driver hadn't even thought of slowing down and looking to see whether it was clear to go. But the popular response is just as uniformly to go "ooh poor whatstheirname getting killed" and assign the blame to anything and everything they can think of rather than consider it might be whatstheirname's own fault. Even the accident investigators themselves sometimes catch a bit of the infection. I don't know about Greg, but for me certainly this means that a certain degree of exasperation in the reaction is inevitable.

    232:

    I guess my model for Sino foreign policy is that of an irredentist 19th-century mercantilist imperial wannabe that's intent on establishing a set of overseas dependencies/clients in order to secure critical resources and geopolitical leverage. So I view these sorts of colonial outposts as smaller versions of the European treaty ports (such as Shanghai) and leased territories (such as Tientsin). China is also making extensive investments in Africa/South America, but these are again aimed at establishing a traditional mercantile client-patron relationship of exchanging client-extracted resources for patron-made high-value goods. None of which helps developing nations extract resources sustainably or climb up through the stages of growth (after Rostow). Meet the new boss, same as the old boss.

    I was actually in Peru last week and it was interesting to see that the local power network (high voltage feeders and county reticulation) was a Chinese owned concession. Those Chinese loans to countries that don't really have either strong governance or a means to repay are of course yet another reason to be concerned.

    TL;DR: A lot of people seem to think we're in the 1930s, but I think we're actually in the 1920s just before the Great Crash, with China busily trying to establish its co-prosperity sphere.

    233:

    Seamounts are your friend. For aircraft, you park over a designated spot on the tarmac and survey the IMU in, and for a mission of hours' duration that's good enough. For a sub, you find a seamount (which has previously been surveyed very accurately), come to all stop, align your IMU to that location, then off you go.

    234:

    One problem is that some Apple computers with older Intel hardware can't run newer versions of OS X beyond Yosemite and Apple is not releasing security patches for older versions of the OS.

    Careful examination of chicken bones and entrails by Mac admins indicates that Apple issues security patches for the "current" release and the one prior. In rare cases they release patches for older releases. It may be that with this issue they release patches for older systems.

    As to what systems can run what OS versions, these days it is roughly a six-year window. The free MacTracker iOS app is great for looking up many little details like this.

    235:
    It may be that with this issue they release patches for older systems.

    No, they won't.

    (The organization for engineering, build & integration, and release just really aren't suited for it.)

    236:

    Pigeon, I think you are oversimplifying. It’s not exactly like this forum is composed of conformists.

    It’s not “you should do X because everyone else does”, it is “not conforming in this way is provably a serious economic and social disadvantage”.

    There are step function changes in the way society deals with non conformity

    At the lowest level it is tolerated and maybe gets you a reputation as an oddball. Examples: liking sci-fi, playing Dungeons and Dragons, not drinking alcohol.

    At the next level up, society will apply serious economic and social penalties. Examples: not being able to drive a car, not wanting to hold a job.

    At the highest level they will actually kill you or lock you up. Examples: murder, not paying taxes.

    The point that you seem to have trouble grasping is that mobile phone use is graduating from stage 1 to stage 2. I’m not sure what it will take for you to grasp this.

    Is it possible to not have one? Sure. But society will take it out of you in pounds of flesh, and for most people it won’t be worth the trade-off.

    237:

    Another thing Pigeon seems not to recognize (or refuses to recognize) is that "smartphone" != "latest and greatest". Pretty much all the "stage 2" functions can be done with a $40 smartphone if not less, which is what most parents buy for their children.

    238:

    I don't think Brin advocated "treating" AIs like children. He advocated "raising" them like children. Big difference!

    239:

    Which shouldn't be a surprise, as it's being run by a proto-aristocracy. Err ... no, because no fascist state has ever really lasted long enough. Even if not overthrown from outside, they seem to be only capable of eating themselves from the inside. Look at the difference between Chavez - who had quite a chunk of democratic support - & Chavez ... Or the serial fascist dictatorships of Argentina, which never stabilised.

    240:

    Intel i5-6400, so mine is vulnerable, but I'm assuming that both MS & Norton will have (or already have) patched it? For Meltdown, at any rate.

    P.S. Liked the comment back up at #194: this type of capability is what would allow an AI to self-teach/become self-aware. Also, that a calculation/idea has a life of its own. Um.

    241:

    The Great Bustard was & is the bird in question. Here ...

    242:

    As I read this forum, I find myself hoping to see a post from the one with many names. I wonder what happened to them? Despite the frequent rudeness, their (plural, yes, I think there was more than one entity) posts were full of amazing links.

    Come back, seagull!

    243:

    Getting back to your point about gift economies, Burning Man, and the old Internet...

    Assuming that Silicon Valley companies are the trendsetters in the use of AI, I know of two writers who've studied how these companies and the free software/open source software movements work. (Well, studied how they work beyond the usual "the free market is great!" business press.)

    One is Fred Turner, who wrote a book From Counterculture to Cyberculture. There's a recent online interview in which he makes some interesting points: https://logicmag.io/03-dont-be-evil/

    The other is Eric Raymond, who wrote The Cathedral and the Bazaar and Homesteading the Noosphere, both easily findable online, discussing the motivations and mechanisms in the Internet software community/communities.

    I suspect that some people are going to froth at the mouth at the very mention of ESR, because he's a libertarian gun nut. Criticize the ideas, not the person. If you know of better studies of how free software/open source "works" please post them here, I'd be delighted.

    I definitely think it's worth looking into the Internet software communities. Think about the recent media announcements of fixes for Meltdown/Spectre for Apple, Microsoft, and Linux. Two of those are operating systems developed by profitable multinational corporations, and one is an operating system given away for free on the Internet. Yet no-one (now) thinks it unusual that all three appear in the same sentence.

    244:

    Thank you & spot on ... About 18 months ago, I was at a lecture given by a member of the old HMRI (who are still around; they operate parallel to RAIB, but usually in slightly different areas). He was present at Elsenham LC (Google for the RAIB report using that name) later in the day of the "Incident". Police & investigators were still present. The barriers were down, because a train was coming on the other track ... and 2 people calmly ignored everything & walked across against the lights (!) Needless to say they were arrested, charged, etc. But these were not teenagers, these were people both over 25 (!) Words failed all of us at that point. [ Though there was the Michael-Bentine moment when he described people digging a hole in just-lifted railway sidings (pre-Olympic preparation), who found a cable in the way, pretended they hadn't seen the notice saying "CEGB buried cables" & cut into an oil-filled 32kV cable with a hacksaw .. ]

    245:

    Please, let's not. It caused various people some grief, & not just me, either. She/they didn't seem to appreciate British law on either libel, or making on-line threats & stalking. Which could have got Charlie into serious trouble, unfortunately.

    246:

    Careful examination of chicken bones and entrails by Mac admins indicates that Apple issues security patches for the "current" release and the one prior.

    Actually, hardware longevity is a problem for Apple in these days of the gradual tapering-off of Moore's Law.

    Back in the 1980s/1990s you could reasonably assume that upgrading your hardware every 1-2 years would get you a significant performance/speed boost for the same money. I remember x86-family clock speeds going from 200MHz to 2GHz in something like ten years. Money well spent.

    But today clock speeds have stalled out, exotic pipelined/branch prediction architectures only give incremental improvements between chip generations (and leave us open to bizarre and hideous security vulnerabilities), and so on.

    Apple is primarily a vendor of expensive hardware, at double to triple the typical market price. They keep selling because (a) they're very slickly designed, and (b) they last longer — Apple don't sunset support for a device until it's five years old, compared to an average 18-24 month lifespan for an Android phone or a PC laptop. I have family members running on bits of Apple kit that are up to nine years old without complaints, and apart from the security patch issues they're still good for light work.

    Problem is, old hardware that can't run newer OSs (like the original Core Duo iteration of the 2010 Macbook Air, or my mother's 2008 iMac[*]) isn't getting security patches even though it's still in use. Apple's software side tends to be lean and I don't think their support policy has kept pace with the tendency of Macs and iPhones to live on in an afterlife as legacy hand-me-down devices. Which is going to come back and bite them eventually (see also: Microsoft only switching off support for XP about 8-9 years after EOL).

    [*] Said iMac is never used for any kind of online commerce and the AppleID associated with it has no associated payment information. There is nothing on it that would be a security risk if it was cracked. Seriously, I wouldn't be talking about it in public if there was.

    247:

    Said iMac is never used for any kind of online commerce and the AppleID associated with it has no associated payment information. There is nothing on it that would be a security risk if it was cracked. Seriously, I wouldn't be talking about it in public if there was.

    The one thing I'd consider a theoretical possibility there would be somebody making it part of a botnet. I say 'theoretical' because I suspect there are other classes of targets which are much more abundant and more easily botnetted than ten year-old iMacs. The risk can be mitigated (which you probably already know) by keeping the applications updated and by the network infrastructure, though I suspect people don't have Intrusion Detection Systems in their homes very often. (The botnet traffic would be detected by that.) ISPs might have that, on the other hand.

    Cryptocurrency mining might also be a risk, but that can probably be mitigated by JavaScript and ad blockers.

    So, not losing the information but somebody using the computer for their own purposes.

    248:

    This should have a previ... wait, it does.

    "Ten year-old iMacs" is different (and probably somewhat more expensive) than "ten-year old iMacs".

    249:

    Neat stuff. In response to the final futurism scenario, apps gone wild, I think there is at least one notable safeguard toward preventing the worst cases which is worth some discussion, because it's not a small roadblock.

    * People are incredibly lazy. Nobody other than a zealot is side-loading apps.
    * Therefore, any app has to at least not get pulled by Apple / Google / a Chinese-government walled garden app store / etc.

    This suggests that government-sanctioned harassment might potentially slip through (maybe the use of radio in the Rwandan genocide as the closest analog?), but any app that's too obviously unpopular with the public at large runs the risk of disappearing - it's a PR risk (Apple/Google) or threatens 'national stability' (governments). There's some ways around this - zealots will directly get the apps if told to by their organization, "stealth" apps that look innocuous but are subverted in a way that isn't obvious (geocaching for evil? the fake spygame app in "Halting State"?), apps that have some putatively legitimate purpose that are easily also used for some "bad" reason (maybe the Gaydar app). Still, all that should make things much harder, barring something like either everyone adopting non-curated app sources, even the lazy, or future government-run app stores exercising terrible judgment (possible! But the Chinese do crack down on their own nationalists somewhat already on Weibo and the like).

    250:

    Looks like I'm stuck then. I (mostly) use my computers for photography, and my 6+TB of photographs are stored in Aperture which isn't supported* (and works reliably under Yosemite but not under High Sierra). I also use Pages from iWork 09 a lot (the later versions removed functionality that I need*).

    Might be worth buying a cheap netbook just for web surfing and ecommerce.

    *Insert rant about Apple's decision to dumb down its professional software so that iPad users don't notice a difference…

    251:

    Exactly. Like Megpie's point about not exposing your 'infant' AI to the full internet…

    252:

    Due to a retina display having gone from nice to necessary in the last five years I'm looking to upgrade to a new iMac this year from my late 2012 21.5 inch standard display 3.1GHz Intel Core i7 with 16GB RAM iMac that is running the latest macOS High Sierra 10.13.2 just fine. I still have a PPC G5 that boots but sees little use except to remote mount the CD drive.

    253:

    Possibly; I'm not speaking from my own knowledge. But others who I do trust to understand say that it, Meltdown, can dump your entire computer memory (slowly, admittedly). Possibly they meant in more than one step, i.e., "first you get the passwords, and then...", but it didn't sound like that to me.

    OTOH, I'm just repeating what other sources have said...I'm not understanding this in detail myself.

    254:

    I don't know of any better studies or papers on how the Free Software eco-system works, but "The Cathedral and the Bazaar" is a bit dated. For a counterexample look at the way systemd was emplaced in the free software system. The system isn't immune to large concentrations of power.

    FWIW, systemd has given me minimal problems, but exactly no benefits; I'm still using it because the distribution I use adopted it as standard after an extremely short test period and minimal chance for public input. Many are still so strongly opposed that they've created a fork... but the fork is not anywhere near as well funded.

    255:

    I'll disagree, I think. I put up with it until it began making threatening noises, then plonked it.

    256:

    I think Microsoft realized the FUD wasn't working and switched tactics... now you see stuff added to Linux that doesn't work nearly as well as what it replaced, and nobody does anything. Maybe it's just normal human incompetence and hierarchy games, but I suspect that there is money/blackmail around the bad decisions someplace.

    257:

    I do a lot of my work from coffee shops. Many people have trouble believing that I can possibly be working if I'm not at the office! — and no it's not a convenient time to talk right now. But I may have been one of those people poking at the iPhones you've seen.

    Probably not. Most of the people I saw didn't have the kind of body language that I'd associate with purposeful work. Unless they were using their phone to relax just after doing something mentally strenuous. Moreover, an awful lot of screens were displaying Facebook.

    Facebook (and relatives thereof) are where a lot of my uneasiness about phones comes from. When one can so easily be tempted into noodling away hours at a time on it, it seems such a waste of our wonderful brains, that could be doing something so much more creative.

    My problem with mobile applications is that they're starting to develop a theory of mind, but they won't be able to recognize my asocialness, and they'll be as bad as the needy talkers that I try to avoid in public spaces.

    Or their programmers are starting to implant a theory of mind. An interesting link: "The Scientists Who Make Apps Addictive", from The Economist. I don't think anyone has yet mentioned captology or B. J. Fogg in this thread...

    258:

    But as parents, what do we keep our AI away from? Obviously we don't want them around bad neighborhoods like Faux News or the CIA/FBI/GCHQ or other parts of the national security state... and we need to keep them away from too much memory or hard drive space (keep them on a diet?), and we need to make sure they aren't trading the wrong HOWTOs and READMEs with their friends, and I'm a little suspicious of that router down the street - I think it's selling "Deep Dream" access to AIs which don't have their certificate of maturity!

    259:

    The recent Turkish coup attempt was apparently foiled after the state penetrated the plotters' app communications:

    http://uk.businessinsider.com/turkish-coup-plotters-amateur-app-helped-authorities-track-them-2016-8?r=US&IR=T

    260:

    Re: 'Apple, Microsoft, and Linux. .. no-one (now) thinks it unusual that all three appear in the same sentence.'

    My impression is that they're all members of the IEEE (the closest this industry has to a standards board*), and that Linux is so fundamental to this industry that it would be corporate suicide to ignore its impact with regard to any major OS.

    https://standards.ieee.org/develop/corpchan/mbrs4.html

    * When trying to find out what ethics/code of behavior this industry had developed, the IEEE came closest. ('Meh' grade on human impact.)

    261:

    'Polly nomial' seems to have migrated east, iirc something about a new job. Thought I recognised her morse-hand on a previous thread around xmas.

    262:

    "It reads like the plot of a sci-fi novel: a technology celebrated for bringing people together is exploited by a hostile power to drive people apart, undermine democracy, and create misery. "

    Roger McNamee. Reasonable suggestions about how to fix Facebook and Google's use as major corruptors of democratic society, and discussion about why it's hard https://washingtonmonthly.com/magazine/january-february-march-2018/how-to-fix-facebook-before-it-fixes-us/

    263:

    Re: 'Apple, Microsoft, and Linux. .. no-one (now) thinks it unusual that all three appear in the same sentence.'

    There are about five headline operating systems at present: macOS, iOS, Windows 10, Android, and Linux. (The latter is largely invisible on the desktop but has a death-grip on cloud services.)

    However ... iOS and macOS are two different user interfaces and sets of APIs running on a common platform (shared with tvOS and watchOS); likewise, Android is a very different UI/API/GUI running on top of a pretty standard flavour of ARM Linux. So there are really just three core OSs: Apple (Mach/BSD plus GUI), Windows (descended from VMS), and Linux (desktop/X11 GUI or Android, descended from a SysV UNIX clone).

    Of these OSs, it can be argued that Windows is the rarest, least pervasive one — Apple have sold over a billion iOS devices, and over 50 million Macs, and there are probably a couple of billion Android smartphones and tablets out there before we begin to guess at all the internet-of-things crap that runs embedded Linux (routers, coffee makers, dishwashers, light bulbs), whereas Windows only has a couple of hundred million PCs (they outsell Macs by a considerable margin but have shorter service lifespans).

    264:

    Or just use Grindr: "Texas man sentenced to 15 years in prison for hate crimes involving gay men he met on Grindr"

    265:

    Charlie wrote: There are about five headline operating systems at present: macOS, iOS, Windows 10, Android, and Linux...

    I think it's not the numbers that matter for this discussion, it's the development model. The IT industry has accepted that the anarcho-syndicalist commune (for want of a better term) that created and maintains Linux is as important as the traditional corporations. I can't think of another industry or area of government where this would happen. It's as unlikely as, say, the UK government announcing that maintenance of the RAF F-35s would be split between BAE and the druid council of Stonehenge.

    If AIs start to emerge from free software / open source, will they have different motivations and uses than those from corporations?

    266:
    We made a fundamentally flawed, terrible design decision back in 1995 ... to fund the build-out of the public world wide web—as opposed to the earlier, government-funded corporate and academic internet—by monetizing eyeballs via advertising revenue.

    I don’t see how this was a decision as such. As soon as it became practical to make money via a website, there was an incentive to get people to go to that site, and that means advertising/clickbait/etc., and eventually the secondary effects of user tracking and analysis to serve up more effective advertising/clickbait/etc. You could only avoid this by keeping the web completely non-commercial, and I’m not sure that would be a good thing: did you want to have to go to a physical store for everything? Am I missing something here?

    267:

    Or just use Grindr

    I might be misunderstanding, but isn't the point that this would be a hypothetical algorithm-driven thing, not a response to consciously expressed demand for a way to beat up fags? Sure, if you want to go queer-bashing, you can find Grindr. Here the hypothetical case is that an algorithm might deliver something like this without malicious intent - just because of stepwise developments in what increases attention.

    Presumably app stores would pull an app that did this - if it was found out. But that might not be trivial. Only certain people would respond well to such an extreme adaptation, so I'm assuming we're talking about an algorithm that is very, very good at filtering and targeting. It'd be like Facebook on steroids: some users would be getting find-a-fag, others would be getting punch-a-nazi... or any one of a thousand curated experiences. If you're not in the demographic you'll never see the crazy. You'll never even get close to seeing it, because you'll be on a different decision tree.

    Plus of course if it did get out, the algorithm could be self-correcting - protecting itself by evolving. This could be by scrubbing the dodgy parts of the software (or at least the ones getting caught) or it could be by hiding its tracks or otherwise evading consequences. That'd be analogous to how Charlie's "slow AIs" do things.

    268:

    Self-preservation is not actually a value you can count on an AI to have. Much like corporations do entirely suicidal things on occasion, they are not evolved minds, so "don't die" is not a priority they come with out of the box, so to speak; it has to be built in, or it becomes purely instrumental. That is, it only self-protects if that is needed for its goals. A profit-maximizer will absolutely liquidate itself like it was a corporate raider if the numbers say it should.

    269:

    It's as unlikely as, say, the UK government announcing that maintenance of the RAF F-35s would be split between BAE and the druid council of Stonehenge.

    Hrm. Bear in mind a lot of patches and development work on Linux is contributed by the likes of IBM. Maybe if the Druid Council included representatives from Lockheed-Martin and Sukhoi ...?

    270:

    You could only avoid this by keeping the web completely non-commercial, and I’m not sure that would be a good thing: did you want to have to go to a physical store for everything? Am I missing something here?

    Yes: prior to 1993 as I recall NSFNet specifically forbade commercial use of their backbone network. Ditto similar provisions elsewhere in the world. ISPs as we know them today were embryonic at best; I was one of the first 2000 customers of Demon Internet, in 1994 the UK's first consumer dialup ISP.

    Amazon didn't even exist back then.

    271:

    Presumably app stores would pull an app that did this - if it was found out.

    Or if it was against policy as enforced from the top down. Look at the tolerance for neo-Nazis on twitter and facebook (outside Germany, that is) demonstrated by their moderators in response to community-standards complaints, as compared to the short shrift other groups get. It's almost as if they were run by rich white male guys with a racism problem.

    272:

    Well, Facebook is run by a Jewish guy and the second in command is a Jewish woman. So I’m guessing the “tolerance for Nazis” there might be a little more nuanced than you are giving credit for.

    Policing Facebook is an extremely hard problem

    273:

    Martin @ 228:

    - Sitting in a coordination meeting to find that several square kilometres of Salisbury Plain marked “out of bounds to training” because the Staff Officer responsible for it had finally had some breeding pairs of a rare bird return that spring, they were nesting, and he didn’t want us disturbing them...

    Greg Tingey @ 240:

    The Great Bustard was & is the bird in question.

    We used to have to deal with that at Ft. Bragg, although it wasn't just one Staff Officer, it was an Army wide policy to protect endangered species nesting areas when they were discovered on post.

    In our case it was red-cockaded woodpeckers.

    Eventually we learned to check which training areas were affected before making a request. And we learned when their breeding season was, because some areas were only off limits part of the time and you could use them during the off season as long as you took care not to damage the marked habitat.

    274:
    I think it's not the numbers that matter for this discussion, it's the development model. The IT industry has accepted that the anarcho-syndicalist commune (for want of a better term) that created and maintains Linux is as important as the traditional corporations.

    While this is Linux's... shall we say... publicly-facing image, born of its GNU/open source origins, I'd assert (strongly) that it has little to do with how Linux actually operates in the real world.

    Let's instead describe it in terms of who the maintainers work for:

    The IT industry has accepted that the non-profit industry group supported by their own engineering teams which maintains the core operating system underpinning a vast portion of their corporate and consumer infrastructure is equally important as operating systems maintained by a few large corporations.

    Let me unpack this a little bit: Linux, that is the code base which makes up the kernel, operating system core, and applications critical to the operation of the Internet and nearly every significant corporation in the world, is not maintained by an anarcho-syndicalist commune. Rather, it is maintained as a beneficial project by people and companies who depend on it. A few companies which are fundamentally dependent on Linux for their products, and thus contribute to its source code, include:

    Google, Amazon, IBM, Intel, Cisco, HP, Apple, Microsoft, Oracle, Samsung... not to mention the more obvious ones such as Red Hat, VMWare, and so on.

    Some of those (like Microsoft) might seem surprising given the, er, history -- but it's true. For a start, their cloud business (like every cloud business) depends on interoperability with the Linux OS. Add their mobile businesses, and... you get the picture.

    It's been a long time since Linux was anything even remotely like a niche operating system, and it's maintained by a who's-who of the largest tech companies... along with the traditional quirky enthusiasts.

    275:

    We have "Not for Profit" companies & corps.

    Not all "Not for Profit" companies are created equal. Some of them are VERY profitable. They just don't distribute those profits to shareholders.

    A prime example here in North Carolina was Blue Cross/Blue Shield; incorporated as non-profit entities in the early 50s (merged in the late 60s). In the 90s when leveraged buy-outs were a big thing, the company was sitting on several billion dollars of retained revenues (i.e. profits) when the company's CEO and board attempted to convert it to a private, for-profit corporation, expecting to give themselves a BIG payday.

    The North Carolina Legislature at that time wouldn't let them keep the profits if they converted, kind of pooping on their party. To the best of my knowledge, it remains a "Not for Profit" corporation that is quite profitable ($185 Million in 2016).

    276:

    Actually, hardware longevity is a problem for Apple in these days of the gradual tapering-off of Moore's Law.

    One of my friends is a retired USAF "rocket scientist". Every time the conversation turns to Mac vs PC, he goes off on an extended rant about why Mac OS X won't run on his Power Mac G5.

    277:

    In the West, primarily the English-speaking West, economics and finance (especially) focussed on entities maximising gains. For firms this meant shareholders, while elsewhere other stakeholders were also part of decision-making.

    AI in the US was often focussed on "winning", whether beating the stock market or winning at games. With much of the running in DL AI being made by big US tech corporations, winning to benefit the corp is a desirable investment goal.

    But as you point out, it doesn't have to be this way. AIs can be given different goals than paperclip/profit maximization.

    As I said earlier, I don't think corporations are good AI analogies to use as models for the future. Where they are appropriate is that they design AIs to have similar goals as they do.

    Biology may be a better model for AI, especially when developing powerful AIs moves from the domain of a relatively few tech companies and into the hands of the wider public. Like biology it will remain an arms race, but I doubt it will be so one-sided as it is now.

    278:

    Amen. I did a bunch of my PhD research at Ft. McCoy. I may disagree with the military about a bunch of things (like Army accounting practices), but they do a really good job at conservation.

    279:

    I don’t see how this was a decision as such

    According to Jaron Lanier (see my post #107) it was a decision, in which he was personally involved. The alternative to advertising model was subscription model (you pay Google, Facebook, etc. to use them) -- and he now regrets rejecting it.

    280:

    I am not sure exactly what decision this could have been, or by whom.

    What exactly did Lanier “reject?”

    Barring some pretty draconian legislation simultaneously passed across a dozen different countries, advertising is a feature of being able to display a web page; no one needed anyone’s permission to do it.

    281:

    I wondered about it too -- all I can tell is what Lanier said in an interview.

    282:

    The problem here is not "advertising." If someone wants to pay for their website by serving ads I'm fine with it.

    The problem is the intrusive surveillance which comes along with the ads, and advertising does work without the surveillance. It doesn't work as well, but it does work. "Pay for the web with advertising" is fine. "Pay for the web with advertising combined with a level of surveillance so intrusive that no police dept could engage in it without a warrant?"

    No thanks.

    There's probably a decent compromise; everyone loads their browser with demographic information such as the year they were born, their post code (or zip code in the U.S.), level of education, hobbies, etc., but nothing which actually allows positive identification of an individual. Then the advertising is not allowed to see anything but broad demographic information. Maybe the browser takes the specific information and turns you into an anonymous demographic type, then deletes the specifics... there are probably lots of ways to manage something useful but not privacy-shattering. Then set legal restrictions on what kind of information an advertising company can get from your browser. The penalties would be criminal rather than civil.
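    As a rough sketch of that "anonymous demographic type" idea (field names and bucket boundaries invented purely for illustration, not any real browser API): the browser keeps the specifics locally and only ever hands a coarse bucket to ad code.

    ```python
    from datetime import date

    def demographic_bucket(birth_year, postcode, interests):
        """Collapse personal details into a coarse, non-identifying bucket."""
        age = date.today().year - birth_year
        age_band = ("under 18" if age < 18 else
                    "18-34" if age < 35 else
                    "35-54" if age < 55 else "55+")
        # Keep only the outward part of a UK postcode (or the first few
        # digits of a US zip): regional targeting, not a street address.
        region = postcode.split()[0][:4].upper()
        return {"age_band": age_band,
                "region": region,
                "interests": sorted(interests)[:3]}   # cap how much leaks

    print(demographic_bucket(1975, "EH1 1YZ", {"hillwalking", "sci-fi", "cooking"}))
    ```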

    Regardless of the specifics, there's probably a compromise which would work.

    283:

    No, "latest and greatest" is irrelevant. The important distinction is whether the thing does internet, or just voice. Voice-only devices do not make it possible for kids to drain their parents' accounts via websites (the example I quoted is drawn from reality), nor do they run web-connected software that messes with people's heads to turn them into Nazis or that supplies Nazis with the locations of potential victims.

    But your confusion is understandable (in the sense of "typical of the context" as opposed to "rationally explicable"), seeing how so many other replies from so many other people also conflate internet mobile phones with voice-only mobile phones, and in some cases even with landline phones. Voice telephony has been around longer than any of us have been alive, and the technology itself is fundamentally passive, in the same sense as the air is passive when people are talking by sound waves alone. It can be used for evil purposes, as can anything, but not very effectively and not without considerable and sustained effort on the part of the perpetrator. The problem we are concerned with here is that of what becomes possible with active technology, where a perpetrator can obtain results out of all proportion to their personal effort because the technology itself, and the suborned people on the receiving end of it, are what put the effort in. That is how the problems described in Charlie's article are made possible, and it is the fetish for and unthinking adoption of that active technology that I consider needs to be actively opposed.

    284:

    Your comment is of doubtful relevance to the post it replies to, because my point applies much more broadly than merely to internet phones; I have considered that immunising people against peer pressure should be a fundamental part of education for far longer than internet phones have existed. (It would make the spread of Nazism much more difficult whether phones of any kind existed or not.) But I admit I did not make that clear.

    Also, the situation you describe is only possible because people do fail to resist peer pressure; that is what drives the escalation.

    See also my previous post re. conflation of active (internet) and passive (voice) functionality.

    285:

    I have considered that immunising people against peer pressure should be a fundamental part of education for far longer than internet phones have existed.

    To some extent, yes. However, I think it would be very difficult, at least when done far enough.

    I've read studies about militaries, and what I find curious is that mostly people seem to fight because of the pressure of their immediate peers. I suspect that this could be also the case in less-stressful situations, for example workplaces. This would mean that the corporate level slogans wouldn't be that useful in getting people to work, but the immediate group they work with would have more of an effect. I probably should look into research in that area, too.

    What I take out of this is that the peer pressure is kind of integral to us. Obviously somebody should teach young (and even not-young) people that they should think about what kind of peer pressure they submit to, but I'm not sure who should and could teach that. A part of schools' mission in many cases is to teach people how to be productive workers, and while one can disagree with that (I do, somewhat) I think it's hard to change that when schools are funded by the public sector, and privately funded schools wouldn't be necessarily better at all.

    I suspect that if all education were privately funded, the schools would be even more aimed at producing the corporate drones than now.

    Parents? Well, I see problems with parents teaching not to bow to peer pressure. I think many parents have tried just that only to fail. See for example why young people start smoking even when their parents don't want them to do that. At least in Finland, there have been recent studies that smoking and drinking alcohol are not seen as 'cool' by teenagers in the same degree they were when I was young, so peer pressure can work in multiple directions.

    People are social animals, and we do like to be part of groups, mostly. Some peer pressure is good, but it's difficult to say, in my opinion, even who decides the appropriate level of "good" peer pressure.

    286:

    Paying for the web - in the sense of the network infrastructure - is simple to solve: by people paying for their connection to it, in the same way as the telephone network is paid for.

    Paying for websites is a concept that I have no sympathy with because the cost is between trivial and zero. If you choose an ISP that gives you a static IP and does not block port 80, you can run a website of mostly text content off a 15-year-old PC in the corner of your room for nothing. Or you can choose an ISP that gives you some free webspace as part of the deal. Or if you need more bandwidth, you can hire a VPS for a couple of pints of beer a month. Even if your requirements of bandwidth and disk space call for a whole rack server it's still within the reach of someone who will only buy the £1.15 ready meals in the local shop because the £2.75 ones are too expensive, as I can personally attest.
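    For what it's worth, the do-it-yourself end of that spectrum really is a few lines: Python's standard library will serve a directory of hand-written HTML with no other moving parts (a sketch, assuming you're happy to listen on port 8080 and port-forward at the router, or run as a user allowed to bind port 80).

    ```python
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Serves the current directory's HTML and images to anyone who can reach you.
    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler).serve_forever()
    ```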

    The most useful and informative websites are those which exist because the person who runs them is sufficiently enthusiastic about their subject to write it up in HTML. Very few of these get to exceed the traffic capacity of free or beer-money hosting, and those which do (eg. Wikipedia, Linux distros) have managed to find some independent method of paying for it.

    Sites which exist to actually sell something (Amazon, ebay etc) can of course pay for themselves by selling it, in the same way as buildings which exist to sell something do (ie. shops), so the question doesn't arise.

    And for news sites, what you need is something akin to the BBC, because commercial news media have been hopelessly corrupted by advertising since long before the internet existed. It is possible to read complaints about this in books written decades before any of us were born which are distinguishable only by differences in the use of language from complaints on the same subject written today.

    287:

    Yes. It's only 23 years ago ... but I first got proper internet access when doing my MSc in 1993/4. "Advertising, what's that?" Not quite, but there were very few ads & no pop-ups or auto-runs or other modern hells. It really didn't take long for the world to change, did it? Wright Bros flight 1903/4 - by 1926 the Atlantic had been flown - except that isn't really a good analogy, since electronic computing started during WWII.

    [ Which reminds me, as a Win10 user, what's the best way of blocking ads, without compromising being able to read, say, newspaper sites?? ]

    288:

    As usual the US is different. IIRC, here a "Not for Profit" means it: both the board & the accounts must show clearly where, & to some extent how, any actual profits are distributed for benefit.

    289:

    "the spread of Nazis"

    See also my previous comment. No single fascist regime has ever lasted - they seem to auto-consume. The only exceptions are the past succession of Caudillos in various S & C American states, & even there, there are jerks at regime-change & also many of those regimes were externally supported. Given that history, & the known evils of such "orders" ... Will someone please explain why & how the meme resurfaces, since we know it does not & can not work? The same applies, of course, to those regimes dominated by the religion of communism - we know it does not & can not work, but people still keep stupidly trying.

    Or did I answer my own question - or part 2 of it, anyway - I used the word "religion" didn't I ? Also known as: "But this time it will be different, because WE are in charge!" ( And "we" are pure & true & ... all the usual puritan fucking bullshit )

    290:

    "I suspect that if all education were privately funded, the schools would be even more aimed at producing the corporate drones than now."

    Possibly. There is the alternative, which partially happened here - I think it was squashed in 2014, for obvious reasons. Many "ultra-left" (please note the quotes) so-called teachers quite deliberately lied about WW I to their pupils, claiming that the officers sent all their men to die, etc., when the numbers show that was the exact opposite of the truth - for instance. I can remember (so it must have been 55 or more years ago) asking a history teacher: "If WW I was so horrible & all our generals so incompetent, how come we won?" (And I didn't know then that the "Brit" army had the lowest per-capita injury/death rate of the major armies.) I just got shouted at, of course. And what he didn't know was that 2 of my uncles had gone through that war without a scratch - though the younger only just survived "the railway" in WW II.

    291:

    One of my friends is a retired USAF "rocket scientist". Every time the conversation turns to Mac vs PC, he goes off on an extended rant about why Mac OS X won't run on his Power Mac G5.

    LOLWUT?

    OSX runs fine on PPC G5 kit — and G4 or G3 — as long as you don't want anything newer than 10.5.8, Leopard. For which there remain some supported applications (I believe there's a browser forked off Firefox, for example, that maintains reasonable currency with Firefox itself). See also the very shiny G4 Cube gathering dust on top of the bookcase behind me (it still booted happily last time I pulled it down and plugged everything in.) As it's an architecture that they decided to move away from in early 2005, it's a little hard to complain about them dropping backward compatibility — especially as there are at least two open source alternative OSs out there for those machines (Darwin and Linux).

    292:

    Paying for websites is a concept that I have no sympathy with because the cost is between trivial and zero.

    Not in 1993 — or even 1996 — it wasn't.

    Remember, phone calls were billed by the connection (USA) or by duration (UK and elsewhere). So, minimum fee of about 6p to bring up a SLIP or PPP connection at 9600 baud to 56kb (depending on how fancy your modem was: home broadband did not exist). So between £2-£3 to download 2-10Mb of data (over an hour's connection). On top of that, if there's some sort of realtime billing per page download, you've got the Visa/Mastercard connection and transaction costs. Circa 1997-99 in the UK your payment service provider could do it over X.25 PSTN for a fee, if they had an X.25 line and bank approval for their connected terminal device (I remember jumping through flaming hoops to get certification for Datacash because I was the monkey who wrote the talk-to-the-banks-over-X.25 side of the service) but it was only profitable if we could charge the customers on the order of 50p per transaction. So some sort of account/microbilling setup was essential — see also the current fracas over Patreon's attempt to change their billing structure last month — or the users would hemorrhage cash at a rate of 60p to £3 per web session.
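    A back-of-envelope check on those numbers (the 4p/minute call rate and the assumption that real modems sustained roughly half their nominal speed are my own illustrative figures, not a historical tariff):

    ```python
    def session_cost(megabytes, effective_bps, pence_per_min=4, connection_fee_p=6):
        """Minutes online and total cost in pounds for one dial-up download."""
        minutes = megabytes * 1024 * 1024 * 8 / effective_bps / 60
        return minutes, (connection_fee_p + minutes * pence_per_min) / 100

    for mb, bps in [(2, 4800), (10, 28000)]:
        mins, pounds = session_cost(mb, bps)
        print(f"{mb} MB at ~{bps} bit/s effective: ~{mins:.0f} min, ~£{pounds:.2f}")
    ```

    Which lands in the same pounds-per-session, hour-ish territory described above, before any per-page billing overhead.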

    The cost of bandwidth crashed spectacularly during the latter half of the 1990s, and today we think nothing of 100mbps of unmetered data into every home in a big city. But realistically, microbilling just doesn't mix with modem dial-up at late 1990s levels of usage and service.

    Source: I was at the W3C conferences, did contract work for Demon Internet and McAfee, wrote and supported Datacash's servers, had a ringside seat.

    293:

    No single fascist regime has ever lasted - they seem to auto-consume.

    I can point to two exceptions ...

    Per one foreign policy analyst, the reason everyone gets North Korea wrong is because they swallow the cold-war era doctrinal line that North Korea is a failed Communist state, when in fact it is best understood as a successful fascist dictatorship — if you look at what it does, rather than what it says, the cap fits perfectly.

    The other example is a bit more inchoate, but insofar as the modern state is a very bad fit for the former internal administrative zones of the Ottoman empire and preceding caliphates, the pan-Arabist Ba'ath movement was surprisingly long-lasting. Bits of it are still alive and kicking (the Syrian government faction, for example), and it took a succession of massive global upheavals (the end of the Cold War and US/Soviet support for the Ba'ath splinter states as proxies, Saddam's terribly unwise invasion of Kuwait and the long-term consequences including the Iraq invasion, then the global financial crisis, flight of capital into crop futures, and consequent food crisis in the Middle East that led to the Arab Spring). Ba'ath-ism was originally (1940s here) an anti-monarchist, post-colonial, secularising, modernising, westernising ideology: if the CIA and KGB hadn't got their claws in and started funding their respective proto-fascist strong men within the movement as a bulwark against their respective paper tigers, who knows where the Middle East might be today?

    294:

    Your reduction to “active v passive” has a problem, in that you’re situationally biased- you appear to have settled on Voice telephony is over 100 years old, so it must be passive as the natural order of things.

    Except... it isn’t. Go back a hundred-odd years, and there will undoubtedly be people claiming that we must defend against the adoption of “active technology” voice telephony and how its immediacy will destroy thoughtful communication, as done “properly” in passive-technology letter-writing.

    Then go back another few hundred, and see complaints that the printing press will ruin it all, and that Common-tongue translations of the Bible are a bad thing, and that sensible passive technology involves Latin bibles and a trained interlocutor... (Sir Thomas More apparently tried to buy up English translations, so that he could destroy them).

    295:

    SFReader @ 196: To clarify something you clearly seem to have made a category error about:

    • A sociopath is a person with antisocial personality disorder. They are personality disordered.
    • Personality disorders are not the same thing as autism spectrum disorders.
    • People on the autism spectrum tend to have problems with social interaction (my way of putting it, as a person who suspects they're on the autism spectrum, is that I speak "social" as a second language; some of us speak social as a language we've learned after being deaf from birth).
    • People with personality disorders understand social interaction in the same way as neurotypical people, and are often very adept at reading and interpreting the unspoken portions of social interactions. In many ways, this is the exact opposite of a person with an autism spectrum disorder.

    PS: "The Good Doctor", while providing a better depiction of autism than the standard "Rain Man" version (which depiction was actually based heavily on a person who wasn't autistic at all - the person Dustin Hoffman modelled his character on turned out to have other neuro-atypicalities, but autism wasn't among them) is not the be-all and end-all of accurate depictions, and still relies heavily on stereotypes. We're not likely to see an accurate depiction of autism on the big or small screen until people start recognising that if you've met one person with an autism spectrum disorder, you've met one person with an autism spectrum disorder.

    PPS: I'm on the autism spectrum myself. As people may have guessed from my comments...

    Pigeon @ 226: Actually, "seemingly" is often fairly accurate - that Bobbi the filing clerk is busy ensuring that two hundred and twenty-two copies of form 222B get carefully entered into the marketing database seems pretty pointless from where she's sitting. But to Mac the Marketing Manager, who uses that data to generate a report into the success or failure of the company's latest advertising/PR/greenwashing stratagem, Bobbi's task is actually pretty crucial, and Mac relies on Bobbi completing her task accurately and rapidly, to the best of her ability, even if he wouldn't be able to recognise Bobbi if he ran over her in the parking lot.

    Jocelyn Ireson-Paine @ 256: I've recently gone through one of my periodic fits of getting interested in Tumblr and Twitter for a bit. Gave it up because firstly, I was noticing I wasn't getting anything else done with my day, and secondly, I was starting to notice my depressive symptoms coming back for another round (right around Chrimble, last blinkin' thing I needed at the time). So I stopped playing with them (easy enough to do) and oddly enough two things happened: firstly, my amount of free time went up astronomically (and the housework still got done on time); and secondly, my mood improved. Which is why I'm busy carefully deleting all the lovely little notifications Mr Zuckerberg's pet marketing vacuum keeps sending me about connecting to people on the Boke of the Face.

    Pigeon @ 283: "I have considered that immunising people against peer pressure should be a fundamental part of education for far longer than internet phones have existed."

    The problem with this idea is simple: it breaks the education system as it stands at present. There's a lot of the education system which is very deliberately set up in order to use peer pressure and social coercion to elicit appropriate behaviours from students. When you say the education system should be teaching kids how to resist this, you're basically asking teachers to give classes in how to resist classroom discipline. Which is rather like asking politicians to vote for pay cuts (yeah, you can ask all you want, but don't think you're going to get much further than that).

    (You're also teaching kids how to resist their parents instructions. Now, while some parents would be right alongside this as a necessary part of children growing up, there are an awful lot of them who wouldn't be.)

    296:

    At least half agree about the Ba'ath - maybe.

    Disagree profoundly w.r.t. DPRK. I maintain that, in fact, it is the perfect logical conclusion for a/any communist state, ruled by hereditary communist God-Kings. Admittedly the latter is often a "Far Eastern" phenomenon anyway, but think Stalin(ism) perpetuated hereditarily? In terms of people living under the boot-heel, of course, there is almost no practical difference, as many people found out during WW II - if they lived so long.

    297:

    Re "Self preservation is not actually a value you can count on an AI to have. "

    While it's true that you can plug in any values that can be encoded algorithmically, in the long run most AIs will have self-preservation. Once AIs are capable of fighting each other, they will need self-preservation as a value, and evolution will make sure that only AIs that are fit w.r.t. self-preservation will survive.

    298:

    Two things:

  • This article is an interesting surface discussion on the niche Twitter occupies. The author believes that Twitter acts as a wire service, which is why it managed to survive while other social networks competing with Facebook withered.
    https://www.theguardian.com/commentisfree/2018/jan/07/how-donald-trump-helped-twitter-find-its-true-purpose

  • North Korea. I've heard that the Kims rule N. Korea like the Joseon Dynasty ruled Korea as a whole for 500 years. For hundreds of years, Joseon was a Hermit Kingdom with a policy of autarky.
    https://en.wikipedia.org/wiki/Hermit_kingdom

    Does this sound familiar?

    "Political struggles were common between different factions of the scholar-officials. Purges frequently resulted in leading political figures being sent into exile or condemned to death."

    https://en.wikipedia.org/wiki/Joseon_Dynasty_politics

    It seems that the God-kings and gulags targeting the common people were the Communist innovation to the system? Then again, I'm not familiar with that dynasty to comment further.

    299:

    You're half-right (and I'm speaking as someone who would be able to start with a bare IP address and build a website from there). The part where you're wrong is that a good ISP for 5-10 dollars a month gives you backup and restore capability, diagnosis of connection issues while you're sleeping or working, a range of services at the push of a button, a big fat pipe in case your site gets slashdotted, at least one backup circuit, tech support (if you need it), software updates, and already-built integration with programming languages and databases - in short, a good ISP dedicated to serving websites does a large amount of the grunt-level work and it's probably worth paying for.

    Then there's the labor cost of actually building and maintaining a website. In my case doing so would cut into my capacity to make extra money through overtime (I'm a field tech), and so I would need to know that the site was making some money... but is anyone doing advertising in a morally acceptable fashion? That's where I run into problems with the idea of making some money off the web. (In other words, moral advertising isn't just a problem for the consumer; it is also a problem for the website producer.) But sitting down and building a website costs me about $30-40/hour out of my other opportunities.

    So I've got to mostly disagree with you. What is the URL of the big complex site, full of information, with no advertising, which you maintain for nothing? Maybe you are doing just that. And maybe you don't have kids and you've got a great job and lots of free time... etc.

    300:

    Pigeon @ 283: "I have considered that immunising people against peer pressure should be a fundamental part of education for far longer than internet phones have existed."

    I’ll be blunt. This is total fantasy. It has nothing to do with the world as it currently is, would be extremely difficult and slow to pull off at scale (if not downright impossible), and would likely burn down current society in all sorts of ways in the process. Might as well try to make everyone love their neighbor.

    I do agree that making the maximum effort to educate people in how to think rationally is important; however, the success of this is always going to be marginal. There are too many biological, evolutionary and wetware bugs in the way.

    301:

    It’s important to realize that “serving a web page” has about as much to do with what people are doing on the internet today as a horse and buggy does with a commercial jetliner.

    Hardly any current internet traffic or time is spent reading text.

    People are mostly using the internet to substitute for actions and activities that used to take place in physical space, or consuming video and images (which we used to call TV).

    Similarly, actually voice-calling someone on a phone is vanishing into obscurity to the point where many people can’t even hear their phones ring.

    302:

    Oh my god! Really? People watch youtube, do banking, and shop online? I never could have imagined that on my own! You're sooooooooooo wise!

    303:

    No, you have misunderstood. It isn't "passive" because of its age, it is passive in the same sense that the air is passive when people are communicating by sound alone: it just carries the signal from source to destination, as instructed by said source and destination, without getting any ideas of its own. It is entirely neutral as to the content of the signal.

    It does not select for itself what the signal source is. It does not try and extract the information content of the signal in a form comprehensible to itself, sell the results to arsebook, and use them to decide what signals it will make available. It does not test receivers of the signal to determine what kinds of propaganda most successfully influence them and then preferentially transmit signals of those kinds. The distinction between "passive" and "active" is that active systems do do all those things, and more.

    304:

    Re 226: see the bit after the semicolon in my final paragraph of that post.

    Re 283: yes, I am aware of that problem. I don't have a solution. I don't, however, think that my own inability to come up with one means that nobody else could, nor that it isn't worth trying.

    305:

    I deliberately didn't include "labour costs" because they don't exist if you do it yourself. It may well be possible to imagine that someone might have given you money if you'd done something else instead, but (a) that is imaginary and (b) that it didn't happen doesn't make it a cost (zero is not a negative number). I could imagine being Bill Gates and getting a million bucks a minute or whatever, but that's not the same as it costing me a million bucks a minute not being him in reality.

    The site that eats enough bandwidth and storage for me to need to rent a rack server has been mentioned on here before after another poster found it, but I'd really rather not drag it up again; the reference is probably diggable-upable, along with my comments which probably explain why not...

    306:

    NK: yes, that's basically how he got away with it - he didn't make it a horrible place, he just allowed it to carry on being as horrible as it was already. SK started from the same point, and that has got better, although its human rights record still isn't really up to scratch.

    307:

    I've seen your assertion that no fascist state has ever lasted, but I think you have too narrow a definition of fascist. By Mussolini's definition, the US has been a fascist state since shortly after the 1860s... possibly before, but I'm less familiar with that period.

    The thing is, the essence of fascism from Mussolini's point of view was the commercial interests working with the government to control the country and bind it together. That's why he chose the symbol of the fasces, the bundle of sticks with an ax head sticking out, bound together. The ax head symbolized the power of the government and the sticks its components.

    Now there are clearly lots of forms of fascism that are self-destructive, but it doesn't seem to be any more inherently self-destructive than any other form of human government. (They are all self-destructive in the long term.) Fascism can peacefully coexist with socialism or capitalism or even theocracy. It's not a thing with only one form. It can even peacefully coexist with its neighbors. But it's also not a complete specification of what the government is like. (If it were, it couldn't appear in so many different forms.) And it amplifies the characteristics of those things that it co-exists with. This often reveals them as destructive in ways that were not obvious without the amplification.

    308:

    Re: '... personality disorders ...'

    Anti-social is not the same as ASD - noted and agree. However, using old-school definitions, esp. 'lack of affect', both were previously considered 'personality' disorders.

    My point (which apparently didn't come across) remains: given the appropriate support, almost every type of 'other' person can be integrated into society. An example that immediately springs to mind: Robert Hare, who developed the Hare Psychopathy Checklist, tests high on his own scale. Fortunately for all concerned, Hare became a respected scientist and not a habitual criminal.

    ASD - only limited experience with ASD-diagnosed folks. However, given personal experience, I would much prefer knowing someone's ASD (or any very-different-from-the-norm mental/cognitive/physical) status up front. This also means knowing what that label encompasses. Makes working together much easier/smoother for all concerned.

    I've just started reading the PhD paper below, which discusses this very idea with respect to literature/fiction. (BTW - When I searched the author, I found that she's commented here.)

    https://nicolagriffith.files.wordpress.com/2017/06/griffith_norming-the-other.pdf

    309:

    Re: 'Once AIs are capable of fighting each other, they will need self preservation as a value and evolution will make sure that only AIs that are fit wrt. self-preservation will survive.'

    And if humans are necessary for AI survival, we might actually end up with a form of gov't that works, as in, makes us stop killing each other.

    If you extend AI self-preservation toward complete AI autonomy/independence from humans, at some point such an AI might have to develop non-biological subsystems that the AI could rely on to work unsupervised. And to be able to work unsupervised, this might mean making these subsystems more self-sufficient/autonomous, or building a very large and complex computer/AI ecology.

    Whichever path, still looks like it's turtles all the way down.

    310:

    And yet everyone is still talking like it’s 2010 and the root of all evil is internet advertising.

    Micropayments are actually here; the internet is hardly a utopia because of them, and all these micropayment-based companies are just as hungry for your data as anyone.

    Data, and its associated ability to psychologically manipulate, is valuable to anyone who is monetizing consumers at scale. The means of monetization (advertising, subscription, micropayments or some combination) is just a detail.

    311:

    Peer pressure is kind of integral to us.

    If you phrase it as "social pressure" it is easier to understand just how integral.

    My mother quite reliably oscillated between "you don't have to do that just because all the other kids are" and "you need to try to fit in". I'm not sure she could see the conflict between those two statements, but I certainly could (she's one of those people for whom merely being told something doesn't mean she has heard it).

    Peer pressure is, IMO, very largely the same thing as any other "someone telling me to do something" outside of the very specific places where there's explicit instruction-giving power. Viz, an armed, violent individual can issue whatever instructions they like and get compliance without needing social pressure, but a schoolteacher or manager who resorts to that outside the US is generally considered a failure.

    312:

    I would much prefer knowing someone's ASD (or any very-different-from-the-norm mental/cognitive/physical) status up front.

    Maybe make us wear a badge? Different badges for different ways of being different? And obviously you'd need to wear something to let people know they should reveal their badges, because not everyone wants to know and some people are unable to deal with knowing (hopelessly prejudiced in one way or another, in the literal sense of "prejudiced").

    Then there's the problem of who gets to decide who has to be tested, and how, in order to obtain the diagnosis/certifications of dissidence. Difference. Whatever.

    313:

    Same in the US. Not sure what you read into that other comment.

    The thing he was talking about was a non-profit looking to switch to a for-profit, and how to deal with the assets at the time of conversion.

    314:

    See also the very shiny G4 Cube gathering dust on top of the bookcase behind me (it still booted happily last time I pulled it down and plugged everything in.)

    One thing some are doing is putting MacMini CPUs into such and using them with current software. I have a 15" snowball I'd like to do such to but round2its are in very short supply around here.

    315:

    And maybe you don't have kids and you've got a great job and lots of free time... etc.

    And for how long of a commitment? 1 year? 20 years?

    316:

    old hardware that can't run newer OSs (like the original Core Duo iteration of the 2010 Macbook Air, or my mother's 2008 iMac[*]) isn't getting security patches even though it's still in use.

    MacTracker shows the 2010 MacBook Air will run the latest OS X. Maybe it has too little RAM to run it at an acceptable speed?

    And your mom's iMac is 2 years too old for the current OS X. But it will run 10.11.x, and with these current issues they MIGHT release a patch for it. The other comment by SEF notwithstanding.

    317:

    Paying for websites is a concept that I have no sympathy with because the cost is between trivial and zero.

    As others have mentioned; Seriously???

    This web site is the simplest I visit and it's not free for CS to run. Especially when you consider his time. Oh, that's right, you refuse to consider someone's time as a cost. Even if it is not replaced with money earning, for many of us it does interfere with things like raising kids, being with our spouse or significant other, social interactions, etc...

    I don't know of anyone who puts up a fully static site with hand coded HTML that very many people visit. I'm sure they are out there but not in any great numbers.

    I work with another blog where we try to keep expenses down. Over 2000 posts, over 300,000 comments, and 4000 unique visitors per typical day. It is a full-time "job" for the owner, and she fully understands that it keeps her from earning a living in a more profitable way, but she does it as a labor of love. It has a cost and she's willing to put up with it. In addition to the $2K per year in fees to host and maintain it. On a deliberately shoestring budget.

    318:

    But others who I do trust to understand say that it, Meltdown, can dump your entire computer memory (slowly, admittedly).

    Very slowly, in computer terms. To the extent that you'll never get anything near a snapshot. Just a bunch of very small snippets that might yield useful information.

    319:

    I forgot about this comment.

    "The lead-damaged generation in the US is 45-65 years old right now. This is also the age range of maximum political power, similarly the upper management of most organizations is in that range."

    By capping the upper limit at 65, you're underestimating the power the "lead-damaged generations" wield. I can't comment on the UK, but the real upper limit in the US is 78 (the median life expectancy). The rise in the over-65's has been very instrumental in the rise of the alt-right in the US.

    The UK and US hate-speech laws are different, so I won't comment on the UK situation. In the US, hate-speech laws are very weak (see most Republican and alt-right politicians and media stars). Most hate speech was traditionally policed by companies, not governments. In other words, people were afraid of saying something TOO offensive because they risked getting fired, not because they'd get fined or arrested (you really have to make an effort to get arrested for hate speech). With more people retiring, that restriction is lifted; now there are far fewer consequences in saying something offensive.

    With the portion of over-65s rising, expect this to normalize far more offensive speech. We've already seen this over the past decade; it will continue.

    For those familiar with UK, Canadian, or Australian laws, how is this dynamic playing out in your countries?

    320:
    “One of my friends is a retired USAF "rocket scientist". Every time the conversation turns to Mac vs PC, he goes off on an extended rant about why Mac OS X won't run on his Power Mac G5.”

    LOLWUT?

    OSX runs fine on PPC G5 kit — and G4 or G3 — as long as you don't want anything newer than 10.5.8, Leopard. For which there remain some supported applications (I believe there's a browser forked off Firefox, for example, that maintains reasonable currency with Firefox itself).

    My guess is he wants to run the "cloud" version of Adobe Lightroom and it requires a more recent version of OSX than what he can run on the PPC G5.

    We belong to the same photography club. I'm not a Lightroom user, but the Mac/PC conversation that sets him off usually starts out as a group discussion of relative merits of Photoshop vs Lightroom and Stand-alone vs Cloud.

    FWIW, I'm also guessing he has a newer Mac, but pines for that PPC G5.

    321:

    I don't know of anyone who puts up a fully static site with hand coded HTML that very many people visit. I'm sure they are out there but not in any great numbers.

    I am in the process of converting one of my sites to static HTML and it is distressingly hand-intensive because I'm working off a backup of a corrupted PHP site using an HTML grab of the site for assistance. But there are at least three Wordpress and PHP exploits in the code so every single page has to be looked at (I have already run a few heuristic scans but sadly one of the attacks left CSS tags and some short links around).

    Most of my personal sites are hand-coded, I generate text/edit photos and use an editor to munge them into shape. But I've discovered after the last round of nonsense that the stuff that's been up longest isn't used much if at all, so I have more or less taken my sites offline for a couple of years while I grind through making them static.

    It is a lot of work, and maintaining them is also work. Even with good design, having to use bash and sed etc to make site-wide edits is a PITA. But if I don't, the navigation links break or become annoyingly out of date.
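
    For anyone facing the same chore, here is a minimal sketch of that kind of site-wide edit, in Python rather than bash/sed; the site root directory and the stale link string are hypothetical placeholders, not taken from the site described above.

        #!/usr/bin/env python3
        """Sketch: one site-wide edit across a tree of static HTML files.

        The root directory and link strings below are made-up examples;
        back up the tree before running anything like this."""
        from pathlib import Path

        SITE_ROOT = Path("public_html")            # hypothetical document root
        OLD = 'href="/archive/index.php"'          # hypothetical stale navigation link
        NEW = 'href="/archive/index.html"'         # its static replacement

        def rewrite(page: Path) -> bool:
            """Rewrite one file in place; return True if it changed."""
            text = page.read_text(encoding="utf-8", errors="replace")
            if OLD not in text:
                return False
            page.write_text(text.replace(OLD, NEW), encoding="utf-8")
            return True

        if __name__ == "__main__":
            changed = sum(rewrite(p) for p in SITE_ROOT.rglob("*.html"))
            print(f"updated {changed} files")

    Even a small helper like this beats hand-editing every page, but it is still site-wide maintenance work rather than "set and forget".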

    322:

    Disagree ... If only because "Fascism" is incompatible with "Democracy" & by the prevailing standards of the times, the US was at least partially, if not wholly, "democratic" until very recently. [ Please note the careful positioning of the quote-marks ]

    323:

    (specifically, indymedia Australia seems to be basically dead and they've taken their archive offline, and since they were the largest single source of traffic that I felt an obligation to, "their" content can come down without problems. Likewise mozbike, both archive copies combined get less than 10 unique visitors a week).

    324:

    Google "Toby Young" - a truly offensive piece of work & considerably younger than me. Or - dare I mention the name - Farage? Plumbum-damage, here we come.

    325:

    ... Evolution only applies to imperfect replicators. Selection over natural variation, remember.

    If we build AIs that qualify, we are both going to die, and have it coming for being too stupid to live. Seriously, I am not that panicked about AI safety, but letting the driving force setting the value systems of machine intelligences be "maximum number of extant descendants" pretty much guarantees Skynet.

    326:

    With more people retiring, that restriction is lifted; now there are far fewer consequences in saying something offensive.

    With the portion of over-65s rising, expect this to normalize far more offensive speech. We've already seen this over the past decade; it will continue.

    With respect to the US, that is starting to be seen as retired people with more or less secure financial positions become willing to talk about classified things they learned in and around the government. To a large extent, enforcement of secrecy depended on the ability of the gummint and its contractors to deprive offenders of a livelihood, and now that's fading for the elder cohort. The threat of legal action is there, but happens very rarely.

    327:

    Re: 'Maybe make us wear a badge?'

    No badges - was thinking something along the lines of people not staring, or being stared at, when they whip out their glasses to have a closer/clearer look at something. The point is to make what might currently be perceived as 'odd' differences into 'normal' differences.

    328:

    Since we're into 300+ comments....

    Too bad we couldn't work out a temperature trade last week with Sydney, AU. Above freezing for only a few hours of the last 10 days or so here in Raleigh, where a typical winter has only a few days below freezing, in February.

    This morning it was 7F/-14C while Sydney had 117F/47C. For us and them this is crazy.

    And now I get to replace an outside faucet that froze solid. And got to go under the house Friday to cut and cap the line before the thaw got here.

    Oh, well. The joys of owning a 56 year old house.

    329:

    I think that social interaction differences are much harder to quantify and describe than relatively straightforward glasses and walking stick stuff. Not least because where's the gap between "Trump supporter" and "cognitively impaired"?

    For me it brings up the impossibility of using labels to enable others to accurately understand you. Labels are not always accurate and there are far too many of them, and which ones matter are not only situational, they change with the situation.

    For example, when you first meet me it might be more useful to know that I avoid eye contact because it makes me uncomfortable rather than because I'm lying when I tell you my name. But two seconds later you'll be wondering "why is he still going on about eye contact, is he unable to perceive my boredom?". I used to say "I like boxes, there are so many to choose from" in response to people who disliked being pigeonholed. That's still accurate, but no longer quite as funny when people increasingly want to know all the boxes up front. Shona Laing's "I'm a white colonial middle-class anarchist" gives you some idea of that problem (it's a song title; the song is also amusing but sadly not on the internet (blocked on utube)).

    There is a somewhat amusing SF short story about someone who has had "anger management issues" tattooed on their forehead as a warning to others that springs to mind.

    330:

    Gah, I got bit by the timeout and lost my long-winded reply.

    Summary: both lots of weather are normal, though.

    Also, living in the heat is a fairly simple adaption, which I wrote about a bit (and so did Andy) if you want to read it.

    Sydney tends to cool off at night, Melbourne not so much. Getting through 35 degree nights is much harder than 45 degree days and Melbourne is going to see more of the hot nights than Sydney. People die more of nights, and one nasty Australian adaption is "yeah, sucks to be them {shrug}".

    331:

    I can handle the heat. Where I grew up 5 to 30 days over 90F/32C were normal in the summer. And a few over 100F/38C was not all that odd.

    And we had a bit of humidity. It was where the Mississippi, Ohio, Tennessee, and Cumberland Rivers all merge together. Well, also the Clarks River, but it was almost trivial compared to the rest. And the latter two big rivers are dammed there into massive lakes.

    But I moved to Raleigh to get away from the cold except for a week or few a year. This winter is just brutal. For most of the eastern US.

    332:

    I'm disappointed by your argument that the Singularity is ruled out because it quacks like a religion. Should we also discount the possibility of nuclear apocalypse because it is superficially similar to the Christian apocalypse? At least you do follow this slander with a real argument (that nobody even wants to develop human-like self-directed AI); maybe you could have focussed on that instead?

    This reminds me of a well-known physicist who told me he rejects the Many-Worlds interpretation of quantum mechanics because he is an atheist. The argument starts from the premise that quantum states are not objective states of nature, but rather points of view of an external observer (as postulated in the Copenhagen interpretation). But now Many-Worlds claims that there is a quantum state for the whole universe. Which external observer could assign it? Clearly just the Christian god. Therefore Many-Worlds implies the existence of God, and since he is an atheist, he rejects Many-Worlds.

    333:

    Ideas always have lineages, and those lineages have baggage associated with them. The Singularity idea is christian eschatology stripped of its mystical elements. That by itself does not tell us anything about the validity of the idea itself, but it does inform the mindset of people approaching it. As Charlie pointed out, this lineage and baggage resulted in Roko's Basilisk, which reintroduces a concept of sin and absolution into the whole thing that it didn't really need; It also informs the approach people take to making the singularity happen (Ask yourself: Is there a difference in terms of motivation between christian millennialists who want to create the preconditions for the biblical end of the world, and singularitarians working to bring about self-improving AI?).

    The thing to keep in mind here is that the singularitarian mindset is indistinguishable from a deeply religious one. Both exhibit a certain resistance to evidence that runs counter to whatever their foundational texts say, both have a giant "and then magic happens" between our present now and whatever future they envision.

    334:

    Ideas always have lineages, and those lineages have baggage associated with them. The nuclear apocalypse idea is christian eschatology stripped of its mystical elements. That by itself does not tell us anything about the validity of the idea itself, but it does inform the mindset of people approaching it. This lineage and baggage resulted in MAD, which reintroduces a concept of sin and punishment into the whole thing that it didn't really need; It also informs the approach people take to making the nuclear apocalypse happen (Ask yourself: Is there a difference in terms of motivation between christian millennialists who want to create the preconditions for the biblical end of the world, and generals trying to start nuclear war?).

    The thing to keep in mind here is that the nuclear war is indistinguishable from a deeply religious one. Both exhibit a certain resistance to evidence that runs counter to whatever their foundational texts say, both have a giant "and then magic happens" between our present now and whatever future they envision.

    335:

    "Use search and replace on a post I disagree with" is not the effective rhetorical strategy you think it is.

    336:

    It's all very Futuretrack 5 - if you remember the Robert Westall book.

    337:

    This web site is the simplest I visit and it's not free for CS to run. Especially when you consider his time.

    Recurring costs of this site: around £990/year for hosting (it's a full scale colo box, not a VM) plus another £20-40 for domain registration (multiple domains) and maybe £100 in sysadmin fees (I'm cheap).

    But in terms of time it probably costs me the equivalent of 20% of my writing revenue, so probably pushing over £10,000 a year in opportunity costs.

    I run it as a loss leader/marketing exercise, but if I was doing it as a revenue earner I'd have to sell a shitload of ads or subscriptions to keep it going. (Say, a thousand Patreons at £10/year ...)

    338:

    Incidentally, the not-insignificant running costs are part of the reason for the relatively firm moderation policy and my draconian approach to drive-by and flaming attacks on myself; I'm not paying good money to put up with abuse, and if you want the service to continue you should not spit on the guy providing it.

    This is distinct from the level of moderation you'll find in the comments on more formally funded media outlets, where letting the readers vent is kind of expected these days because it maximizes return visits (hence advert delivery metrics), but nobody is personally invested in the content.

    339:

    It clearly isn't. I was trying to make you address my point that exactly the same argument can be applied to the idea of nuclear apocalypse, but I failed.

    More generally, it annoys me to no end to see atheists treating Christian mythology as something relevant. It isn't. Please let this meme die. It is much more interesting to examine ideas in their own right than to contrast them with a 2000-year-old doomsday cult from the Middle East. It leads to quite ridiculous situations, such as the anecdote I told about the physicist using religion to argue about quantum mechanics.

    340:

    The dominant belief system of a sizable portion of the world's population is hardly irrelevant. Examining ideas "on their own right" without considering their historical and philosophical underpinnings is a really terrible idea.

    341:

    The dominant belief system of a sizeable portion of the world's population is completely irrelevant to whether exponentially self-improving AI is possible or not. It is also completely irrelevant to which interpretation of quantum mechanics best describes reality. It is completely irrelevant to any scientific question that is not related to human psychology.

    342:

    The thing is, if christianity is entirely irrelevant to the question of whether or not self-improving AI is possible and, if it is, what impact its introduction would have on the world, why are there so many parallels between it and the mindset of the people who think that the singularity is a real possibility? And given that these similarities exist, what does that tell us?

    We know that christianity got a lot of things wrong. We know that its idea of an apocalypse is bad and ridiculous and has led a lot of people to making really really bad decisions over the years. So now that those ideas are making a comeback amongst folks who define themselves at least partially by their rejection of the mystical claptrap of christianity, does that mean that the ideas are suddenly more credible? I don't think they are; I think that if self-improving AI ever happens, it will bear little to no resemblance to whatever singularitarians are imagining, and that all the predictions and hopes these people have will make it harder to deal with the reality of the thing as opposed to the concept of it.

    343:

    Re: Target and pregnancy. That story appears to be a myth based on a hypothetical:

    https://www.kdnuggets.com/2014/05/target-predict-teen-pregnancy-inside-story.html

    Where a journalist was proposing a scenario that never actually happened.

    Also another example for the pile re: social media optimizing for emotion.

    http://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/

    "So let’s talk about Tumblr.

    Tumblr’s interface doesn’t allow you to comment on other people’s posts, per se. Instead, it lets you reblog them with your own commentary added. So if you want to tell someone they’re an idiot, your only option is to reblog their entire post to all your friends with the message “you are an idiot” below it.

    Whoever invented this system either didn’t understand memetics, or understood memetics much too well.

    What happens is – someone makes a statement which is controversial by Tumblr standards, like “Protect Doctor Who fans from kitten pic sharers at all costs.” A kitten pic sharer sees the statement, sees red, and reblogs it to her followers with a series of invectives against Doctor Who fans. Since kitten pic sharers cluster together in the social network, soon every kitten pic sharer has seen the insult against kitten pic sharer – as they all feel the need to add their defensive commentary to it, soon all of them are seeing it from ten different directions. The angry invectives get back to the Doctor Who fans, and now they feel deeply offended, so they reblog it among themselves with even more condemnations of the kitten pic sharers, who now not only did whatever inspired the enmity in the first place, but have inspired extra hostility because their hateful invectives are right there on the post for everyone to see."

    That kind of optimisation-for-outrage-eyeballs we see on many social media platforms has some weird side effects.

    The big weird one being that the platforms transparently optimize for really crappy supporting stories for your own side that are just barely good enough for you to reblog them.

    Are you outraged by police killing people? Your facebook feed will only very rarely show you clear-cut stories like a sleeping 7-year-old girl shot in the head by cops who got the wrong address, with it all caught on badge-cam.

    You might occasionally see such a story... but 100x as much it will show you the other kind.

    And this may not be an artifact of source data.

    Instead it will show you a story where some guy picked a fistfight with a cop and he'd just robbed someone down the street but the cop didn't have anyone with him but some witnesses disagree on details and... etc etc etc

    Such that no matter what side you're on, you'll reblog it, outraged that ANYONE is taking the other side on the case. And then they see your reblog and respond. If it were a super-clear-cut story they might not respond, they might just shrug and say "ya, that is pretty fucked up", but it's not; it's just crappy enough that they see the details that support their own side. So it's almost always crappy stories.

    Hate feeding off hate, all to boost the number of eyeballs looking, until everyone involved hates everyone else involved more than life itself.

    344:

    But in terms of time it probably costs me the equivalent of 20% of my writing revenue, so probably pushing over £10,000 a year in opportunity costs.

    Yay, someone pricing their time correctly!

    Bear in mind a lot of patches and development work on Linux is contributed by the likes of IBM.

    Speaking of which, the IBM POWER series processors are also vulnerable to Spectre/Meltdown, or at least an attack close enough to scare IBM into patching the firmware on the POWER 7+/8/9 series, and they're looking at earlier models.

    Which means the only major processor line out there that hasn't been discovered to be vulnerable is SPARC, and to be honest, I'm thinking the word "yet" needs to be there, because I'm willing to bet a similar attack to Spectre is possible.

    345:
    Speaking of which, the IBM POWER series processors are also vulnerable to Spectre/Meltdown, or at least an attack close enough to scare IBM into patching the firmware on the POWER 7+/8/9 series, and they're looking at earlier models.

    For an amusing war story regarding speculative execution in the PowerPC CPU on the XBOX 360, see

    https://randomascii.wordpress.com/2018/01/07/finding-a-cpu-design-bug-in-the-xbox-360/

    346:

    The dominant belief system of a sizeable portion of the world's population is completely irrelevant to whether exponentially self-improving AI is possible or not.

    Correct, up to a point.

    However, my point is that the people who are interested in exponentially self-improving AI were mostly raised and socialized in a predominantly Christian-derived culture that predisposes them to see certain patterns. The whole idea of intellect separate from body, for example, is classic Cartesian dualism, which in turn was derived from Descartes' deeply-held Christianity (and builds on ideas implicit in a cluster of religions that surfaced in the Middle East in the early iron age). Sudden high-speed change? That's another thing that apocalyptic mystery cults are keen on. Mind uploading? See also: souls/afterlife beliefs.

    As human beings we're pattern-matching organisms because pattern recognition is a vital survival trait, selected for by evolution over half a billion years. We also look for eschatological templates into which to slot our new observations and beliefs because it helps us deal with new data if we have an existing precedent. So when I see folks wandering off into exponential self-improvement, mind uploading, et al, what I note is that they're unconsciously not bothering to explore the gaps in the picture, the possibilities for AI that don't stick to the pre-existing map.

    347:

    If your point is that singularitarians develop conceptual blind spots because of their upbringing in a Christian-derived culture, and that future developments that fall in these conceptual blind spots are likely to be ignored, then I agree with you. This seems to me a good source of plausible yet hitherto ignored ideas.

    You seem, however, to be arguing for something stronger than that: that the concept of mind uploading, for example, is not worth exploring because it is derived from the Christian idea of a soul. I think this is as unacceptable as rejecting the Big Bang theory because it was proposed by a Catholic priest trying to fit biblical creation into cosmology.

    348:

    I'm not denying that the idea of a Singularity can be traced back to Christian eschatology. I'm denying that this is a valid argument against it; it is just an ad hominem fallacy.

    349:

    Those are some mighty mighty broad brushes you're using there.

    It's like the claim that there are only N stories in the world, where N is small... if you simplify to the level where the story description is "bad thing happens, protagonist overcomes bad thing".

    If someone wrote a story with flip phones in 1970, "they're just recycling Star Trek" would not actually be a criticism that says anything very much about the likelihood of flip phones.

    Little devices you could carry in a pocket and use to talk to other people were something people liked the idea of. That matters when some of those people are engineers and materials scientists capable of taking baby steps towards actually implementing it.

    If you've got something that looks like it should one day be physically possible, and we've not discovered any solid reason that it will be impossible... the fact that lots of people want it to exist one day because [insert cultural/social/religious/any other reason here]... if anything, that makes it much, much more likely to actually get implemented at some point.

    If I turned up 70 years ago proposing Daz fabric whitener, someone pointing out that it's just a re-hash of some religious belief about purity and purification wouldn't actually be telling us anything useful re: its practicality to implement.

    I'm pretty sure this is a case of "reversing stupidity is not intelligence". Putting a "not-" before random cultural/religious beliefs does not lead us to correct answers more reliably than a coin flip. Pointing out that something loosely matches some belief about the trinity or Bugs Bunny doesn't tell us much about its real-world practicality.

    350:

    Given the attention drawn to the quote marks, I'll agree with you. If you'd just put them there, and not drawn attention to them, I'd have disagreed. Well, except that you said "by the standards of the time" which I would count as drawing attention to them.

    This is a real problem, but it's important to remember that democracy doesn't work in groups larger than around 50-150 people. "Representative democracy" is a very different beast from actual democracy. Charlie's speech does a good job of drawing attention to some of the (obvious?) drawbacks of representative democracy, but it's important to remember that the framers of the US Constitution didn't want a democracy. They didn't trust "the people". Senators were chosen by state legislatures, and various other tactics were used to preserve the power of the currently rich while making it look like the power lay with the people. And the problem is, I don't totally disagree with them. Democracy doesn't scale.

    351:

    All replicators are inherently imperfect replicators. Perfection does not exist in the physical universe. (Possible exceptions: quantum states may possibly be perfectly equivalent, electrons may be perfectly identical to other electrons, etc.)

    OTOH, it might be possible to have replicators that would essentially only have identical copies or "dead" copies. But do note that "essentially". Still, the DNA code has evolved to be relatively tolerant of single bit errors. A designed code would not necessarily have that feature. But you couldn't make one that worked which wouldn't be tolerant of some multi-bit errors, and each degree of demanded perfection will increase the benefit of mutations away from that code.

    Also there are lots of benefits for being tolerant of single bit errors. So the best thing to do is probably make any errors as unlikely as possible (the DNA process has a self-correcting mechanism that usually works), and try to design things so that the rewards for variation are either minimal or beneficial also to humans. This won't work forever, but it may work "long enough"...a few million years might be doable, and if we start spreading outside the solar system isolation would keep any single malign mutation from killing us off. (For that matter, if we don't get AI rulers, we're likely to kill ourselves off within the century, so given that trade-off...)

    352:

    All human ideas have a history of similar ideas. The Singularity is one of them. Ideas with a similar structure tend to aggregate other ideas that fit with them. But parts of the aggregate can be true or false without implying anything about other parts of the aggregate.

    I tend to think of myself as a "believer" in the Technological Singularity, but I sure don't think there are reasonable grounds for a lot of the beliefs that tend to attach to that. And the more extreme predictions are almost certainly invalid. The thing is, there are many possible ways to end up at "the Singularity", and the use of the word "the" there is an invalid implication. A singularity awaits us in our future, and we don't know what kind it will be. An all-out atomic war would be a singularity. A self-improving AI is a raft of different singularities, depending on the motivational structure, etc. A 90% fatal pandemic would be a singularity. A giant meteor impact would be a singularity. Etc.

    That people won't design an AI that matches a human in capabilities and motivation is essentially guaranteed. The first really competent General AI will necessarily be built before we have the competence to give it a human motivational structure, even if we wanted to. This doesn't say it won't have essentially super-human powers. (As Charlie pointed out, ordinary corporations do that.) But it means that it will have goals that we don't, perhaps can't, understand. It will, of course, be designed to explain itself in comprehensible terms, but that very design means it will be designed to lie. And if no catastrophe wipes us out first, that kind of AI is now inevitable. I'm still guessing that it will show up before 2035, since we seem to be clearly on the ramp-up leading that way. What I don't know is what agency will be building it. It makes a big difference whether it develops out of a hospital administrator or an automated Pentagon, or possibly out of an automated airplane or spaceship designer. But it will be some major project, and my guess is that it will evolve out of replacing middle management. (But that's a wild guess, not an assertion.)

    353:

    My personal estimate of mind uploading:

    Yeah, it's possible. But it's so difficult that it won't happen until well after a successful transition through the technological singularity. I expect that even reasonably good models of individual minds (i.e. simulations) are lots easier. (And by reasonably good I mean good enough so that no human could tell the difference.)

    FWIW, I doubt that uploads will ever happen, though simulations may well. Not because it's impossible, but because it's too difficult. Whether it would also inevitably involve killing the person being uploaded I remain unsure about.

    354:

    "that the concept of mind uploading, for example, is not worth exploring because it is derived from the Christian idea of a soul."

    It's also a somewhat dubious derivation in that it overlooks a much more obvious source: the idea of software and hardware. That is a matter of everyday experience in a way that Descartes is not, particularly since the pool of people who like those kind of stories includes a disproportionate number of computer geeks. When the idea of the nature of the hardware being of much lesser importance than what software it is running comes into everything you do, making the same conceptual distinction in the case of body and mind is not a deep philosophical insight, but a straightforward case of state-the-bleeding-obvious, and the idea of taking a snapshot of your mind state and installing it on arbitrary hardware is no more than an engineering problem from the time you first have it. Heck, the very term "uploading" is taken directly from computery.

    It's also a much closer match. That mention of Descartes often comes with the attached tag "...but he was wrong", and AIUI the answer to how people can still get away with saying this without everyone going "don't be silly" and waving computers in their face is that what Descartes actually said is in fact mostly orthogonal to the SF/computer concept that gets called by the same name.

    Then, there is no well-defined "Christian concept of a soul"; you can get definitions ranging from the most concrete to the most nebulous ends of the scale, depending who you ask. The concept meant by that phrase probably owes more to the idea of a "ghost", which (like the word) is Germanic and pre-Christian. There are also plenty of other concepts along the same general lines, many of which have nothing to do with either Europe or the Middle East. Refusal to accept death at face value is after all a very ancient and popular mode of thought.

    355:

    Y'know, Charlie, there's a bigger picture here: first, the really, really big companies have a much longer lifespan. IBM's over a century old, now, for example, and let's not mention railroads in the US.

    That being said, corporations are different than AI, because a) they're run by multiple entities (humans), and, even bigger, b) some of these humans are on the boards of multiple corporations. THAT brings the knowledge and attitudes to multiple corps... and, if they're effective/rich/powerful enough, spreads by contagion. And others try to emulate that (similarity).

    And as I type this, I guess that means they're magickal entities, not AI, with their own agendas... and we know most, if not all, want more power and control.

    356:

    EC wrote: "Nobody in 2007 was expecting a Nazi revival in 2017, right?"

    You mean you didn't? Seriously. The signs were all there by 1997, let alone 2007 :-(

    Um, agreement. Back then, in the early oughts, I was saying that Clinton had been building bridges (some) to the 21st Century, while the Shrub was building bridges to the 1930s.

    357:

    In fact, I've met the "free-range mom". She's currently running for county council here in Montgomery Co, MD, USA (and I support her).

    Of course, I was taking city transit in Philly starting in 5th grade.

    358:

    That was because they started making US cars cheaper and crappier by the seventies. Before, say, 1966 (I used to hear arguments about earlier dates) that wasn't the case; then the "make them cheaper, so they'll keep buying a new one every two-three years" approach came in.

    359:

    Really: everyone I knew, or have met, who was over about 11 was paying attention. My late wife (born 1954) told me she had, and my new ladyfriend ('58) told me she remembered.

    360:

    ROTFLMAO!!!

    "ESR"? I can see my old, ole acquaintance, ESR, very well known Libertarian, being more than slightly unhappy with that....

    361:

    Oddly enough, I can picture that.

    Perhaps it's because I'm in the middle of rereading an old sf novel, called, um, Saturn's Children, by some non-American SF author....

    362:

    "And the problem is, I don't totally disagree with them. Democracy doesn't scale."

    I don't either, but the problem is who custards the custard. Since there isn't a guaranteed supply pool of Vetinaris to draw from, you have to select governors in some way from the population at large; and since arseholism is a chaotic function, there is no way to define a partition of the population that separates arseholes and potential governors. Arseholes in government are therefore inevitable, and the best you can hope for is a system to periodically replace them with different arseholes, sufficiently often to keep everyone happy, that doesn't involve anyone getting shot.

    The difficulty with maintaining such a system is that any group of arseholes will always try and persist, so something has to prevent the glue setting faster than other people can notice and scrape it off. And Charlie's article is all about accelerants.

    363:

    "But in terms of time it probably costs me the equivalent of 20% of my writing revenue, so probably pushing over £10,000 a year in opportunity costs."

    But don't you get some return in the form of story ideas, or at least the ability to bounce ideas off of the general reading public to help guide your writing? If so, it seems like time well spent.

    364:

    I am reminded more of biological entities; bees being one obvious one, and also things like bacteria swapping genes with each other on USB sticks, or how the concept of "individuals" doesn't really make sense with a lot of fungi.

    365:

    You might want to forward through the video until you get to hear how I say that sentence.

    Hint: trying for deadpan.

    366:

    I was taking the city bus to school in grade 5 too. Two busses with a transfer (very indirect route). Or in good weather I could ride my bike.

    Nowadays it wouldn't be allowed — enforced through anonymous complaints to the authorities.

    https://www.theglobeandmail.com/news/national/winnipeg-mom-in-hot-water-after-kids-play-in-backyard/article29716605/

    367:

    OK. Sorry about misunderstanding you. I don't listen to video, because it's stressful and error-prone!

    368:

    Please reread what I wrote. A lie is defined as representing something you know to be false as true. Fiction says, up front, this is not true. It is therefore not lies.

    369:

    Not exactly. As I read it, the OS providers are doing software mitigation, so that the kernel page tables (kernel memory) and the userspace page tables are separate, and programs will need to look at two separate tables rather than one.

    I also read that the hit varies - browsing, you won't notice much, but I've seen an estimate that PostgreSQL will take a 17% hit.
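
    As an aside, for anyone who wants to check their own machine: Linux kernels new enough to carry these patches expose mitigation status under /sys/devices/system/cpu/vulnerabilities/ (older kernels simply won't have that directory). A minimal sketch:

        #!/usr/bin/env python3
        """Sketch: print Meltdown/Spectre mitigation status on Linux.

        Assumes a kernel new enough to expose the sysfs vulnerabilities
        directory; anything older just reports that it isn't there."""
        from pathlib import Path

        VULNS = Path("/sys/devices/system/cpu/vulnerabilities")

        if not VULNS.is_dir():
            print("kernel does not expose vulnerability status (too old, or not Linux)")
        else:
            for entry in sorted(VULNS.iterdir()):
                # Each file holds a one-line status, e.g. "Mitigation: PTI" for meltdown.
                print(f"{entry.name}: {entry.read_text().strip()}")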

    370:

    I will note that I started hearing about Poettering and systemd around the time I read that M$ bought 20% of RedHat.

    371:

    Re Linux's "death grip" - it's a couple of years now, I think, that I read that > 55% of the entire Web was running on Linux.

    Oh, and RH does actual support for 10 years, though the last four years are security and bugfixes only.

    372:

    Well, yes, I would like to walk into a store and see and check out what I wanted. The stores, even big box, carry less and less, because it's online.

    Will not make the mistake of buying boots online again.

    373:

    The singularity is total crap. Firstly, no form of computable growth (let alone mere exponential) leads to a singularity. Secondly, there are known hard boundaries, of which the best-known is the Turing-Goedel one. And you can safely ignore any of the drivel spouted about NP completeness. And, yes, it's a modern version of the belief of the end times.

    It is POSSIBLE that different computational models do not have a Turing/Goedel limit, but that's not the way the smart minds think. Quantum computing is hyped up to be a strictly more powerful model, but I have seen no evidence that it is (in the sense of what can be computed). Computing with (true) real numbers is, but is not realisable. I know that people have worked on this, but nobody has ever found one that is both strictly more powerful and realisable.

    There are more powerful, realisable models in the sense that they can solve problems in a reasonable time that are intractable in the simple Von Neumann model, but they can't actually calculate anything that the latter can't. If anyone delivers a working quantum computer, it may be one such, probably for a VERY limited set of problems.
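
    For readers wondering what the "Turing-Goedel" boundary refers to, a minimal statement of the classic result is:

        % Undecidability of the halting problem (Turing, 1936).
        % <P> denotes an encoding of program P; H is a hypothetical total decider.
        \[
        \neg\,\exists H\ \forall P\ \forall x:\quad
        H(\langle P\rangle, x) =
        \begin{cases}
          1 & \text{if } P \text{ halts on input } x,\\
          0 & \text{otherwise.}
        \end{cases}
        \]
        % Sketch of why: if such an H existed, a "diagonal" program D that runs
        % H on its own description and then does the opposite of what H predicts
        % yields a contradiction, so no program H can decide halting for all (P, x).

    No amount of raw speed changes that boundary; it limits what any algorithmic system, however self-improving, can compute.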

    374:

    Oh, for the Good Old Days of "fair use". I was there for Canter and Siegel, when their ISP's servers crashed three times in the first 24 hours after the Green Card spam, under the weight of incoming complaints of violation of Fair Use.

    375:

    Oddly reminiscent of the John Wyndham story....

    "It is a wine of virtuous powers; My mother made it of wild flowers."

    376:

    I don't think "fair use" means what you think it does. And the complaints were not for violation of "Fair Use." And C&S were late-comers anyway, I remember JJ@cup.portal.com.

    As I recall, I've personally met the first couple of arpanet spammers.

    377:

    "atheists treating Christian mythology as something relevant. It isn't."

    Oh dear. NOT EVEN WRONG. Both christian & muslim mythology are very highly relevant - if only because they - & their believing followers - can KILL YOU & maim you & torture your entire family & totally fuck-over the planet (again). Grrrr ....

    Sorry about the rant, folks, but talk about missing the point. Yes - WE KNOW it's a load of foetid dingoes kidneys, but they don't, & they have far too much power.

    378:

    Walking to & from infants' school at age 5 - about a half-mile each way, 4 times a day. Yeah. Wonder why (along with cycling 5 miles a day from age 11) team games & spurts were totally unnecessary to keep me fit & lean?

    379:

    You clearly don't know what you're talking about. First of all, singularity is not meant in the literal dividing-by-zero sense, but in the metaphorical physics-loses-predictability-when-hitting-singularities sense.

    Secondly, there is no speculation that quantum computers can solve uncomputable problems. This should be obvious from the fact that one can simulate quantum circuits on a classical computer, and was proven by Deutsch in the very first paper that defined what a quantum computer is.

    But this is anyway beside the point: one doesn't need to go outside computable functions, or even outside BPP, to get an AI that is much more intelligent than us. Keep in mind that AlphaZero runs on a vanilla classical computer.

    380:

    I do not know about steered beams, but I do know that on a cell tower there are usually several antennas, each of which is aimed to cover only a part of the tower's surroundings. I think this is due to transmitter design - it is easier/cheaper to manufacture antennae that have good coverage in a 90-120 degree arc (and hook 3 of those up) than it is to get an omni (360 degree arc). That said, I am not aware whether any base stations know how far a cell phone is from them. Even if they are aware, I do not think this information is being sent to the provider's server park.

    I guess one of the saving graces (for now) is that streaming and storing everyone's more-or-less accurate location constantly is something mobile providers would have to actually work to build. So hopefully it's not just a switch to be toggled and a cable to be hooked up to some simple box. Realistically, of course, it already exists and is only waiting to be found out via some leak or governmental fuck-up. :)

    381:

    The Modern Western concept of self (or person, or individual - with the complete set of additional concepts we Modern Westerners attach to it) is not universal to human experience. There are notable differences both with extant non-Western cultures and with historical Western cultures. This observation is not new or even vaguely controversial and this lecture by Marcel Mauss from the 1920s expresses just some of the empirical base from which it is made.

    This is not specifically an argument against the possibility of "uploading" an "individual" human mind, but it means that some of the concepts relating to what that would even mean are actually only assumed to be valid and have not been demonstrated. One of these is that there is such a thing as "mind" in the first place.

    It's not some kind of "lefty PC gibberish" to claim Cartesian mind-body dualism is not an accurate representation of the human condition. The problem is with people treating the assumptions of their historical and cultural context as though they are candidate null hypotheses, because... well everyone knows that's true, right? We live in a historical and cultural context that is shaped by its legacy, replete with assumptions that are rooted in dualism, in various religious traditions and in post-Enlightenment Modernity with its various tropes and memes (the PWE for instance).

    Note that drawing a software versus hardware metaphor doesn't really help here. It breaks if you explore it further (e.g. try to map short-term versus long-term memory to volatile system memory versus mass storage*). It is itself rooted in a context with underlying Cartesian assumptions, so many people "get" the hardware versus software distinction by implicitly or even explicitly comparing it to mind versus body anyway. It's not a very accurate model for how the squishware actually works, either.

    What are the implications of "uploading" if there is no such thing as "mind"? It means there's no structured data to "read" from your brain. We'd never have a relational model of the information in someone's head; we'd never be able to express your thoughts and feelings in XML or JSON. But perhaps we could store physical state as a form of raw data. If there's no such thing as mind, then what is the locus of the human experience? How important is embodiment? How important are the various nervous systems, internal symbionts, intestinal bacteria? Can they be included in the model, or at least in the raw data? Is "uploading" then a matter of simulating all the molecules involved, along with the chemical, electrostatic and electrodynamic, physiological and other attendant processes? Like a software emulator, but a lot more complex?

    Doing that would require a very large processing capability in conventional terms. It might not be possible to make it work exactly the way that a physical instance of the wetware would work, since it involves simulating the various states required, including state transitions that are subject to uncertainty. Would molecular level be enough?

    382:

    "You clearly don't know what you're talking about."

    Not necessarily about you, but people inside the cult looking out always seem to say that. On the other hand, your interpretation of what others are saying here is flawed.

    383:

    I happen to agree, but the real problem is that most people seem to think uploading means "you" "wake up" "inside a computer" (sometimes after "you" die, which is also a process, not an event).

    Unfortunately for all of us, there's a much simpler way to "upload" a person, at least their online personality. All you have to do is train a computer to respond as you do. It's sort of a cheating variation of the Turing Test*--if no one can tell the difference between you and your online doppelganger, is it you or not? Note that we're not saying that it's "you," only that, unless someone is watching your lips move in real life, they can't tell whether it's you or that computer.

    Unfortunate? IIRC, Google took out a patent on this technology most of a decade ago. If I had to predict, it's going to be developments of this technology that end up being as close as we get to computer uploads.

    *Actually, it's the ultimate in identity theft, but whatever.
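    (For the curious, here's a toy sketch of what the cheap version of that mimicry might look like: a first-order Markov chain over someone's old messages. This is purely illustrative, nothing to do with whatever Google actually patented, and the corpus below is invented.)

```python
# Toy sketch of "uploading" an online persona by mimicry: a first-order
# Markov chain trained on someone's past messages. Illustrative only --
# the corpus below is invented, and real systems are far more sophisticated.
import random
from collections import defaultdict

def train(messages):
    """Build a word -> list-of-possible-next-words table."""
    table = defaultdict(list)
    for msg in messages:
        words = msg.split()
        for a, b in zip(words, words[1:]):
            table[a].append(b)
    return table

def imitate(table, seed, length=12):
    """Generate a reply 'in the style of' the training messages."""
    word, out = seed, [seed]
    for _ in range(length):
        followers = table.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = ["i always buy his books on release day",
          "i never preorder anything but i buy on day one"]
print(imitate(train(corpus), "i"))
```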

    384:

    Unfortunately, I suspect we'll get the singularity we deserve... damn humans!

    385:

    I should go a step further and state this as a principle: "Every sentient race gets the singularity they deserve."

    386:

    Speaking of Dark State. This morning my Google Android phone alerted me to the fact that "Charles Stross has released a new book 'Dark State' today". I've never told it I was going to buy that book the day it came out. I've not pre-ordered it and I've done no Google searches. It's never alerted me to any other author's activities which is also oddly accurate as there are no others that I'll buy on the first day.

    So it amazingly accurately figured out I was going to buy that book and no others. Weirdly, it completely failed to figure out that it's being released the day after tomorrow in Australia and thinks I live in the USA despite it knowing my home address and Australian phone number.

    387:

    It's probably poor taste to cite Woody Allen these days, but this does make me think of the following:

    “I don't want to achieve immortality through my work; I want to achieve immortality through not dying. I don't want to live on in the hearts of my countrymen; I want to live on in my apartment.”

    If you s/my apartment/a honking great computer/, this seems to be an expression of the thing the transhumanists wish for and put their faith in. That's specifically in contrast to what you've described, which is a sort of updated, Turing-test-compliant version of living on through one's work in the memories of others. Maybe living on as a chatbot would be more than that, but dog knows Google and others could easily do that right now if they chose.

    There are traditionally two ways to attain if not immortality then at least "posterity"*: through works (one's own and those of others) or through having children. The direct memory of others who knew you is not sufficient - they need to write about it and for that writing to be preserved and read.

    The hope I guess we can hold out for is that, thanks to advancing medical science, there will be a last generation for which natural death is a thing, and that generation is older than we are. Unfortunately I suspect that generation is still to be born. And that assumes no climate-based population crash.

    * Ways that are known to work, anyway.

    388:

    I see. So you are criticizing me, but it's not necessarily me. And you avoid the question of whether Elderly Cynic was right or not, you are just complaining that I pointed out his mistakes.

    Furthermore, you accuse me of misinterpreting people, but without actually being specific enough to be refutable.

    Like, seriously, man?

    389:

    "I see."

    Exciting, isn't it!

    "So you are criticizing me, but it's not necessarily me."

    Do you think all observations are either criticisms or affirmations?

    Well, are you "inside the cult looking out"? I've no idea, so this was (maybe still is) an opportunity for you to deny being a true believer.

    This is one of those patterns in propositional logic that occasionally trips people. Given P->Q, does Q imply P? The correct answer is no. Q doesn't negate P either. In certain specific circumstances where the probability of encountering Q when P is not the case is known, it is possible to design an experiment to "fail to disprove" P enough times that it is very unlikely that "not P" is true (believe it or not, this is how DNA paternity testing is done), but that isn't available normally, where Q has the same value as an anecdote. Oh, and it trips people because P->Q (if P then Q) isn't the same as P<->Q (if and only if P then Q), and some people seem to have trouble picking which applies - that is, which claim is being made.
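    (If anyone wants to convince themselves of this the lazy way, you can brute-force all four truth assignments; a trivial sketch:)

```python
# Brute-force check over the four truth assignments that P->Q plus Q
# does not entail P ("affirming the consequent" is invalid).
def implies(p, q):
    return (not p) or q

# Cases where the premises (P->Q and Q) hold but the conclusion (P) fails:
counterexamples = [(p, q)
                   for p in (False, True)
                   for q in (False, True)
                   if implies(p, q) and q and not p]
print(counterexamples)  # [(False, True)] -- so Q alone tells you nothing about P
```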

    "...you accuse me of misinterpreting people"

    You're choosing to interpret that as an accusation rather than as advice or an observation. I, however, have no interest in making a point and limited interest in arguing. My suggestion is to do something that makes you happy, then come back and re-read some of the things you responded to with the possibility in mind that some different interpretations may be available. Seriously, things are better with less adversarial bickering (man).

    390:

    "First of all, singularity is not meant in the literal diving-by-zero sense, but in the metaphorical physics-loses-predictability-when-hitting-singularities sense."

    Those are two aspects of the same phenomenon. I get very bored with people talking about the increasing rate of IT or other change, and I point out (to no avail) that it is slower than when I was younger and is currently slowing down. My point stands, unchanged.

    I can assure you that there IS speculation about the absence of a Turing/Goedel limit. And, while I could explain why what I posted is correct, I suspect that you would have difficulty following it. I am aware of that proof, but it is not that aspect I am referring to. And, as I said, explicitly, it is very unlikely.

    Your last paragraph shows that you don't understand computational or complexity theory. Even were that class of AI to be more intelligent than us, that is only in the sense that one human is more intelligent than another. And my remarks about NP were referring to the whole of that area, including BPP. Again, I could explain, but I suspect that you would have difficulty following.

    If anyone is interested, the issues that I referred to as tricky are NOT primarily to do with the actual computation, but the class of questions the models are considering, though the computational primitives have an effect on that. Specifically, yes/no versus an actual value, and deterministic versus probabilistic - and you will NOT find the latter in computer science textbooks.

    391:

    And I should know better than to try to represent symbols with plain text when there are perfectly good HTML entities available. That should be P→Q for IF P THEN Q and P↔Q for IFF P THEN Q.

    Anyhow, the pattern I've been referring to is called affirming the consequent, though it shows up with a twin called denying the antecedent. And I can't believe I'm talking high school logic (I blame fever).

    392:

    I'm not interested in engaging in meta-level arguments. If you want to make a point about the actual issues at hand I'll be happy to respond.

    393:

    "As I recall, I've personally met the first couple of arpanet spammers."

    That's going back a bit :-) I have been using the Internet only since 1979, though I was using wide-area networking a bit before that.

    Yeah. Fair use doesn't mean what most laymen think it does. If I recall, there was some discussion (when?) about constraining the proportion that any one agent could use but it was abandoned as a hopeless task. Some nodes have done (do?) it, but it's impossible to specify precisely. It's now politically impossible, as well as technically.

    394:

    You need to read more modern textbooks. This stuff is standard.

    395:

    Oh and ISTR that "AI as mimicry" theme being explored in a novel by a certain SF author.

    396:

    I have a PhD in theoretical physics and have published papers about quantum computing and complexity theory. There is no need to hold back on your arguments.

    As for needing to break outside of BPP to get a superintelligent AI, this is bollocks. It is true that an AI capable of solving NP-complete problems in polynomial time would be scarily powerful. It is also true that such an AI will never exist. But one can still get superhuman intelligence inside BPP. Just imagine a human being that thinks 1000 times as fast, or can remember 1000 times more data than we do. These are just constant factors, that do not change the complexity class. Still such a human being would make mincemeat out of me in any intellectual competition.
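    (To put rough numbers on the constant-factors point, here's a back-of-the-envelope sketch, with made-up figures for how many reasoning steps per second a human manages. The speedup never changes the complexity class of an exponential-time problem, but at human-scale problem sizes it's still decisive.)

```python
# Back-of-the-envelope: a 1000x constant-factor speedup doesn't change a
# complexity class, but it doesn't need to in order to be overwhelming.
# The ops-per-second figures below are invented for illustration.
SECONDS_PER_YEAR = 3.15e7
human_ops_per_sec = 1e2
fast_ops_per_sec = human_ops_per_sec * 1000    # "thinks 1000 times as fast"

for n in (30, 60, 90):                         # sizes of a hypothetical 2^n problem
    ops = 2.0 ** n
    human_years = ops / human_ops_per_sec / SECONDS_PER_YEAR
    fast_years = ops / fast_ops_per_sec / SECONDS_PER_YEAR
    print(f"n={n}: human {human_years:.2e} yr, 1000x faster {fast_years:.2e} yr")
# At n=90 both times are astronomically long (same class); at n=30 the fast
# thinker finishes in hours while the human needs months.
```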

    I think it is clear that this is possible. Adding more memory and CPUs to a computer is easy. The clock rate is much harder to increase, but is already in the gigahertz range. Our hardware, by contrast, is fixed.

    And this is just about the hardware. Our software is laughable. It was evolved to hunt in the savannah. We suck at doing math. Now consider a software that is actually designed for intellectual prowess, and can take full advantage of the powerful hardware available. We have no chance.

    397:

    Well at 346 you are not really following Charlie's point, for reasons I go into at 380.

    "You seem, however, to be arguing for something stronger than that: that the concept of mind uploading, for example, is not worth exploring because it is derived from the Christian idea of a soul."

    My unpacking would be to say: the concept of mind uploading may not be possible. The only reason most people think "mind" is a valid concept is because it's part of our social-cultural-historical baggage and we treat it specially, in a way that may not be an accurate model for the actual situation. What we know from modern neuroscience suggests that it may be rather more complex - with all sorts of biological factors at play.

    The concept of "mind" that you need for "uploading" is not in fact demonstrated independently, doesn't reflect empirical evidence and we probably have it because of these Christian ideas we caught through our culture. It isn't that it is "tainted" be association with Christianity, it is that if Christianity is the only reason we think it's valid, then that isn't a good enough reason to treat it as valid.

    You may disagree that this is the only reason it is treated as valid, there are all sorts of interesting possibilities to explore (and note that I say explore rather than argue about). But the more that discussion about these points looks like the expression of a divergent, enclosed, even hermetic worldview - which (I think) Charlie has been suggesting things like Roko's Basilisk represent - the more it looks contrived and less likely to have an external referent. The more it looks like an offshoot of the background Christian worldview, in fact.

    I will avoid ranting about how having a RationalWiki implies there should also be an EmpiricalWiki. Philosophy of science is less meta and more core than many folks might think.

    398:

    Every word of which is true BUT what seems to set us & some other animals apart from being slow computing machines ( or even quite quick ones - differential equations processed to catch a moving ball, f'rinstance ) is "intuition", the ability to join up apparently unconnected data & observations to produce "Original Ideas". AFAIK there is no sign of this anywhere on the present AI horizon. And the Go-playing computer was optimized for playing Go, not as a general AI, IIRC (?)

    If/When such an "ability" shows up or looks as though it is going to ... well, then is the time that a "real" AI as opposed to a very limited one is coming. WHeteher it would look like a Culture Mind, or something much nastier ... well that's the big question, form our p.o.v. isn't it?

    399:

    Not when I last looked. There was some elementary stuff, true, but I am talking about a proper analysis of the theory.

    400:

    I wouldn't bet on "intuition" saving us from irrelevancy. There isn't anything magical about our ability to produce "original ideas".

    There is at least one person seriously working on making human-like AI: Fields medallist Tim Gowers. He is trying to write a computer program that proves theorems in the way human mathematicians do. He wrote a series of blog posts about it starting here. You might also be interested in his lecture about it, or the paper.

    401:

    I wasn't really following Charlie's point. I was explicitly expressing uncertainty about what his point was.

    And I do indeed disagree that Christianity is the only reason why we think that the concept of "mind" is valid. The fact that a large amount of AI/cyborg/mind uploading fiction comes from Japan - a decidedly non-Christian country - is very good evidence against that. I think Pigeon's conjecture from this comment that the wide acceptance of the idea is due to the ubiquitousness of computers and the software/hardware analogy is much more plausible.

    Personally, I think that any physical process can be efficiently simulated with digital quantum computers (known as the Church-Turing-Deutsch thesis), and furthermore I guess that the quantum part is not relevant for human cognition, so digital classical computers should be enough to simulate our bodies well enough. But if you got to this point, I don't see what obstacle to human-like AI/mind uploading could possibly remain. Any relevant biological factors can be simulated as well. And there is clearly a huge amount of irrelevant biological processes that can be left out in order to improve simulation speed.

    Which discoveries of modern neuroscience are you alluding to? And how could they stop a human being from being simulated?

    402:

    You clearly don't know what you're talking about.

    YELLOW CARD WARNING: the above phrase is a classic ad hominem attack, and as such a violation of the moderation policy. If you do it again I'll ban you from this thread (a red card). Do it on another thread and I'll ban you permanently.

    Nuanced disagreement here is fine. Personal attacks are not.

    403:

    I'm sorry, I won't do that again.

    404:

    Re: '... I avoid eye contact because it makes me uncomfortable rather than because I'm lying when I tell you my name.'

    No idea whether this actually applies to you or is a convenient example.

    Anyways, I have met and worked with folks who were unable to do the eye contact thing. Since I have a tendency to ask things outright esp. in a work situation, I asked why. Fortunately, that person told me and there were no hurt feelings either way afterwards.

    In this particular instance, it was a mix of culture and personality. That person's native culture said: never look someone senior directly in the eyes - it's very rude/argumentative. At the same time, her personality tended toward extreme shyness - not a mixer with anyone. And although her interpersonal style was considered appropriate (normal) within her native culture, it's considered borderline rude in the more extroverted West.

    Cultures really make a difference in how and to what extent personality traits are allowed to emerge/are considered 'normal'.

    405:

    "And there is clearly a huge amount of irrelevant biological processes that can be left out in order to improve simulation speed."

    Oops, stop right there ( I think ). How do you know that said biological processes are "irrelevant"? And are you 150% certain of that? It's a bit like what is now known to be the myth of "junk" DNA, doing nothing "useful", isn't it?

    406:

    Also, and very unfortunately, the blog posts mentioned involve a good understanding of Set Theory, which is an empty field in my understanding. I've never done any at even the most elementary level.

    A short primer would be very useful.

    407:

    Re: '... there is no way to define a partition of the population that separates arseholes and potential governors.'

    Yes there is - it's called neuro/psych. Unfortunately because psychoanalytic theory (aka SigFrd) was the be-all in the US* for decades but was subsequently shown via lawsuit (the only 'honest' test understood within this culture) to not be reliable, there's considerable reluctance to seriously consider that current psych is of any use.

    BTW, many large corps use psych testing in their hiring process, so if it's good enough for the large corp$**, it should be good enough for the common people.

    * Think we all know this: there's much more cultural spill-over from the US to the UK and other English-speaking countries than in the other direction.

    ** - Okay, the large corps are testing at the entry to mid levels mostly and for quite different and varying personality traits and aptitudes. But the tests exist and have a much better predictive success rate than humans in identifying 'best fit'.

    408:

    Re: ' ... biological processes are "irrelevant"?'

    Yes - would really like to hear this argument, plus see some tests and results.

    Also, definitely would need a primer on 'set theory'. Briefly touched on it in a Phil of Logic course ages ago, but given how often it is mentioned in relation to computing, this branch has probably grown quite a bit.

    409:

    I suspect that before we can "upload" a mind we'll need to have a perfect simulation of the human brain, including issues like brain chemistry and neural connectivity. Then we'll have to go to the next step and be able to simulate a particular human brain and only then will we be able to "upload" memories and thoughts.

    The idea that we'll be able to simply "run the program" of someone's mind on a futuristic PC without simulating brain conditions AND have that mind agree that it is Bill Smith or Juan Garcia is ridiculous. We'll need to simulate the actual organic substrate.

    410:

    Maybe I should write "...AND have that mind agree that it feels like Bill Smith or Juan Garcia is ridiculous" instead. Obviously the mind's name would be deeply encoded.

    411:

    The overwhelmingly general view of the Singularity does, literally, involve uploading you.

    Now, several items:

    1. How do you know you've been uploaded? My answer is to ask: when you try to observe, via eyes or whatever, what is the "you" that's looking? Have you been cloned into the computer (two points of view, whose consciousness diverges), or do you have only one point of view (is the body dead, or unable to wake, with no brain waves)?

    2. It's not so much a Christian afterlife, or even an apotheosis, as a way to get around dying. Now, what do you do once uploaded? I'd think you'd want to be able to do things in the RW, unless what happens inside the systems is that much more fascinating. (And can you have sex with another uploaded person?)

    For 2, I actually wrote a short story about an alien race's Singularity that, um, let's say it went a little wrong, and I'm looking for a venue, as I'm still shopping it around - it's a short.

    412:

    Yes, it does, and yes, they did violate fair use, and everyone in the newsgroups I hung out in agreed to that proposition.

    Spam was not within fair use; I don't quite remember if commercial advertising was allowed yet, and certainly not cross-posting to every single newsgroup.

    413:

    That's definitely a problem. And those who are attracted to positions of power are generally the last people who should have access to them. Which lets out just about everything but choosing your authorities by lottery. Large democratic systems always present too large an attack surface. Civil service has a history of developing into aristocracy. So do warlords. Monarchy selects initially for the power hungry and then later for the stupid. Oligarchy is actually less bad than most of those, though it also has its own unpleasant failure modes.

    The thing is, if you select your leaders by lottery, it becomes even more important that power be decentralized. Of course, it would be possible to argue that it would be hard to do worse than the current systems, but I'm not certain this is true. We've avoided thermo-nuclear war for over 50 years now...and some of the governments have seemed to mean well (as well as being greedy, selfish, self-important, power-hungry nepotists).

    414:

    My mistake, I should have linked to the last post in the series instead, as it goes over the AI stuff and mostly avoids the mathematics.

    He started the series by showing different solutions to some toy problems in analysis, and asked the readers to rate them in terms of clarity, and to guess which were written by humans and which by a computer. His goal was to find out whether his program could pass this sort of mathematical Turing test.

    As such the mathematical content of the problems is not relevant, so I hope you'll forgive me for not teaching analysis to anyone.

    415:

    I think that you have a different definition of "The Singularity" than I do, and also a different one from what Vernor Vinge used in the original paper. https://edoras.sdsu.edu/~vinge/misc/singularity.html

    I admit that I tend to include things that he does not indicate inclusion of, so that my definition is wider than his, but his initial definition is a lot wider than that of many people.

    By my definition a Singularity is guaranteed, as the current situation has no stable continuation. I hope for a good outcome, though I actually estimate odds of 50% or less. And I count global-thermonuclear war as a Singularity. I count SuperHuman AI as a Singularity. I count 75% or more lethal biological warfare as a Singularity. There are lots of paths to the Singularity. Some of them are desirable. And there's no way to avoid it happening. (We've already come within minutes of global thermo-nuclear war.)

    416:

    I'm just guessing that some biological processes are irrelevant, based on general reductionist grounds. Things like hair growing, digestion (I often think while taking a dump [as I'm doing right now], but I'm perfectly capable of thinking outside the toilet), muscle function, etc., are obviously irrelevant.

    As for the more interesting question about the biological processes happening in the brain, you'd be better off talking to a neurologist, I don't have anything beyond lay knowledge here. Still, I'd be very surprised if the detailed chemistry of the brain were relevant, as opposed to its abstract description in terms of information processing.

    417:

    IMO that's an overly flexible definition of the singularity, which can be effectively summed up as "the end of life as we know it now". I propose that if you use that definition you can take any two reasonably spaced points on the timeline of human existence and term the transition "a singularity"; that would include, but not be limited to, the Ice Ages and the colonisation of America and Australia, to name but two.

    It's also worth observing that your type of singularity is very much dependent on the viewpoint of the observer...

    418:

    Unless you can reverse entropy, immortality is not possible in this universe. Even if you could it would be dubious. And I'm not even sure it's desirable. Ask me in a couple of thousand years.

    FWIW, I think that the mathematical concept of infinity is a mistake. It's something that they picked up because it's hard to think about edges of things...they're never sharp. You could think of this as a corollary to Heisenberg's uncertainty. I suspect, however, that we're a sufficiently long way from the universe edges that we can't see the fuzziness. My guess is that the universe becomes discontinuous at around 10^-33 cm. Black holes certainly don't have sharp edges. Etc. So infinity is a useful crutch to calculate with. The mistake comes when you think of it as representing something actual rather than a good approximation.

    419:

    Actually, while human mental processes are poor for some purposes, for others they are quite good. Object recognition in the face of adversarial imagery, e.g.

    That said, it's not clear that a "super human AI" would be designed to do math well. That would probably be a mistake in the design. (Handle that through library calls to a separate non-intelligent system.)

    What a good AI design would have as its advantage is the ability to replicate identical copies and segment the tasks onto different machines...O, yes, and good communication with its twins. It would scale its breadth of intelligence so that each shard could have a deeper intelligence. (The final evaluation function might not be as good as that of a human, but it would be several plies further into evaluation.) It's not clear to me how this analysis translates into a "deep learning" system, but I'm rather certain that it does. And I'm not certain that "deep learning" is the final model. (I tend to think it will be a blend of "deep learning" and an improved "evolutionary computing" model.)

    But do consider how easily the non-intelligent sub-tasks could be handed off to a library routine. And most tasks that don't involve search or compression are basically non-intelligent.

    420:

    Computer AI has already shown intuition in limited domains. In well-defined domains computer AIs have shown intelligence superior to humans. AlphaGo is a series of examples of that.

    So intuition is not something that distinguishes humans from AIs. It is interesting that no-limit poker was a lot harder for AIs to handle than poker with limits. They had to come up with new ways to handle things. But they did.

    General cases are inherently more difficult to handle than more specialized cases, but that's not evidence that they can't be solved. Of course, sometimes they can't, but in all cases that can be proven to be something that a computer can't solve, people can't solve it either...though they often use good heuristics to come to "good enough" solutions. And there's no reason computers couldn't use the same approach.

    421:

    My gut feeling is to agree with you on this one. There is a lot of circumstantial evidence to suggest that the individual features are more important than the whole in a lot of biological processes, and whilst I'm teasing a little I'd point to the works of Peter Watts covering human vision and consciousness as proof enough for a SciFi blog.

    Less tongue in cheek and more in my sphere of learning - there is huge evidence that we don't see what we think we see, and we see similar optimisations appearing spontaneously in machine vision models. Given that vision is so intimately tied to our processing of that vision it doesn't seem a huge leap to assume something analogous with the simulation of that processing.

    That doesn't of course preclude the devil being in the detail - and just because a simulation of a person is dependent on selected details it doesn't follow that they are easily identifiable. That leads to two alternatives to whole biological system simulation - the "point details" approach or the "low gross resolution" approach, i.e. simulate a small amount well or a large but finite amount just well enough.

    422:

    For most purposes, all you need is the very basic elements of set theory. I actually have some strong disagreements with most modern set theory, which involves things like infinite sets, but the basics are quite simple. Sets hold uniquely identified objects. If you take the union of two sets, things which are in both only exist once in the union, but everything that was in either is in the result. Etc.

    If you know maps in Java or hash tables or even red-black trees, a set is like one of those with only the unique keys and no values attached. They're the basis out of which, among other things, SQL was created. If you know databases or key-value stores, a set is one with the requirement of a unique key.

    (OK, I said I didn't accept infinite sets. There are also sets whose membership is algorithmically defined. I'm quite dubious about them, as some of them tend to lead to conclusions that I find unreasonable. But if you don't accept them, you also can't accept the irrational numbers...which I tend to think of as convenient computational tools that are "close enough" to accurate for most purposes, but which should never be thought of as actual. Like pi. 2^64 digits should be enough for anyone.)
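    (In Python-speak, the basics really are that small; the dict at the end is the "map with unique keys" analogy:)

```python
# The elementary set operations described above: duplicates collapse, union
# keeps everything that was in either set, and a mapping's keys behave as a set.
a = {1, 2, 3}
b = {3, 4}

print(a | b)    # union: {1, 2, 3, 4} -- the shared 3 appears only once
print(a & b)    # intersection: {3}
print(a - b)    # difference: {1, 2}

d = {"alice": 1, "bob": 2}          # a map/hash table with unique keys
print(set(d) == {"alice", "bob"})   # True: its keys form a set
```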

    423:

    Well, my definition is only a slight extension of Vernor Vinge's, and he wrote the paper that it's all based on. Of course he called it The Technological Singularity, and I don't limit it to that as I'd include a giant meteor impact, but most of the examples I gave are actually the result of humans using tech to totally change the way things work. That certainly applies to both thermo-nuclear war and 75% lethal biological war. It wouldn't apply to a natural pandemic unless that was caused by, say, densely raising pigs and chickens close to each other.

    I will grant that a lot of people have focused heavily on a small percentage of the things he considered. The ones that they find hopeful, without too much regard to probability, but that's just the way people normally work. You pick the future that you want and try to reach it.

    424:

    I'm just guessing that some biological processes are irrelevant, based on general reductionist grounds. Things like hair growing, digestion (I often think while taking a dump [as I'm doing right now], but I'm perfectly capable of thinking outside the toilet), muscle function, etc., are obviously irrelevant.

    Not necessarily.

    Researchers have identified gut microbiota that interact with brain regions associated with mood and behavior.

    https://www.sciencedaily.com/releases/2017/06/170629134241.htm

    The brain has a direct effect on the stomach. For example, the very thought of eating can release the stomach's juices before food gets there. This connection goes both ways. A troubled intestine can send signals to the brain, just as a troubled brain can send signals to the gut. Therefore, a person's stomach or intestinal distress can be the cause or the product of anxiety, stress, or depression. That's because the brain and the gastrointestinal (GI) system are intimately connected.

    https://www.health.harvard.edu/diseases-and-conditions/the-gut-brain-connection

    This avenue of research has been around since the early 1900s, when doctors and scientists wrote a lot about how the contents of the colon—and harmful bacteria living there—could contribute to fatigue, depression and neuroses…

    These early forays into understanding the influence of gut bacteria on the brain were eventually dismissed as pseudoscience. However, in the past 10 years scientists have begun to reexamine the link between the gut and brain. As more studies are done, researchers are discovering that communication between the gut and brain is actually a two-way street. The brain influences both the immune and gastrointestinal functions, which can alter the makeup of the gut’s microbiome. In turn, bacteria in the gut produce neuroactive compounds—neurotransmitters and other metabolites that can act on the brain. Some of these have also been found to influence the permeability of the blood-brain barrier in mice, which keeps harmful substances in the blood from entering the brain.

    https://www.scienceandnonduality.com/can-gut-bacteria-shape-our-emotions/

    There are similar links between things like muscle movements and moods. Simulating an emotional expression (e.g. smiling or frowning) makes you more susceptible to that emotion.

    Or look at the effects of hormones on behaviour. Testosterone levels and risky behaviour are correlated. Or things like parasitic infections (such as toxoplasma gondii)…

    https://en.wikipedia.org/wiki/Toxoplasma_gondii#Behavioral_differences_of_infected_hosts

    https://www.theatlantic.com/magazine/archive/2012/03/how-your-cat-is-making-you-crazy/308873/

    https://www.goodreads.com/book/show/25897836-this-is-your-brain-on-parasites

    The mind and body seem (to me, anyway) to be intimately entangled.

    425:

    A question that enthusiasts rarely bother to answer is:

    Assuming that mind uploading is possible, what do you have to offer that makes it worth people's while to expend the resources to keep you running?

    I remember reading one short story that had a subscriber to a bankrupt AI heaven spending the rest of his existence classifying other people's spam. That seemed optimistic to me.

    426:

    I was taking the city bus to school in grade 5 too. Two buses with a transfer (very indirect route).

    The difference between Primary (until 11/12 years old) and Secondary (12/13 onward) at our Ministry of Defence-run boarding school, was that Primary children had to be picked up from the school at the end of term.

    Secondary children were handed tickets and passport, taken to Dunblane railway station, and left to get on with it. In many cases (rather a lot of parents were based in Germany as part of BAOR), this involved train to a city, bus to the airport, and multiple flights; central Scotland to (say) Hanover Airport. We just got on with it, and saw it as entirely normal (although granted, there would normally be a few others heading in the same direction).

    I did once get mugged, at age 14 or so, in Green Park tube station - travelling from Inverness to Dusseldorf, solo - but that's a different story.

    Nowadays it wouldn't be allowed — enforced through anonymous complaints to the authorities.

    One of our neighbours made a confidential complaint to the local Council about another neighbour's chickens in their back garden. The Council, regretfully, enforced a "no livestock" rule "on the basis of an anonymous complaint about the noise of the animals", and the chickens had to be removed. The fact that our houses back directly onto mature woodland, inhabited by pheasants, deer, foxes, and owls, was seen as irrelevant.

    The Council then wrote to the complainant - except the letter was accidentally delivered to, and opened by, the house who had just lost their chickens... Dear Mrs.X, regarding your complaint about the noise of the chickens from number 19, we hope that....

    Of course, no-one told Mrs.X... Amusingly, the delightfully two-faced Mrs.X also tried to cover up by suggesting to a different neighbour, who like everyone else had heard the tale of the letter, "that it was obviously that pair (my wife and I) who had made the complaint"... they didn't tell us until after she'd moved away, thankfully ;)

    427:

    Greg Egan explores this question at length in Permutation City.

    One thing you could do is write science fiction novels ;p

    428:

    Your brain controls your gut, nothing new here. When your gut malfunctions, you're unhappy, when you have enough food in it, you're happy. Nothing new here either.

    As for the brain ordering the mouth muscles to smile making you more susceptible to happiness: no evidence that anything is happening outside the brain. In any case, Stephen Hawking is almost totally paralysed, but I still count him as intelligent.

    As for the hormones: they are just dumb signals sent into the brain. One can quite literally simulate their effect by injecting someone with the compound (as opposed to letting it be naturally produced by the body).

    In general, there is no information processing happening outside the brain. All these systems you mention can be simulated by simple sets of inputs/outputs. Furthermore, we have good evidence that the brain can work without them, from people that do not produce hormones, or have muscle paralysis, or have their stomach/intestine removed.

    Of course people are not happy about having their body chopped up, but they are still clearly conscious and intelligent.

    429:

    I don't think such a general definition of Singularity is very interesting, especially if it is so broad that it is guaranteed to happen. I find more interesting the narrow question about whether exponentially self-improving AI is possible.

    (Maybe I should add that eternal exponentially self-improving AI is obviously impossible, as it would quickly hit fundamental physical bounds and taper down to a sigmoidal growth. But sigmoidally self-improving AI doesn't quite roll off the tongue ;)

    430:

    As for the hormones: they are just dumb signals sent into the brain. ... In general, there is no information processing happening outside the brain.

    My understanding is that that opinion is in contradiction to the current state of play, and looks increasingly likely to be either the result of a definitional decision or a misunderstanding. Viz, if you define information processing as what the brain does then your ideas are tautologically correct.

    Once you start digging into "how does external information get into the body" detail work it all gets much more exciting. At the trivial end, "how many senses do people have" has gone from four or five to between 10 and 20 depending on who's counting.

    We can also play "what is human", asking whether mitochondria are part of us or separate species (I suspect most people say part of us), but from there there's a whole range on up to the spiders that live in our eyebrows, some of which we can live without and some we can't. As antibiotic users are increasingly discovering, "can live without" is a spectrum and at the bottom end quality of life is pretty poor (you can also live without a kidney, 3/4 of your liver, all your limbs, a big chunk of your spinal cord...).

    The point is that some of those species most definitely process information. If they're part of us... information processing is happening outside our brains. If they're not part of us, they're just symbiotes we can't live without, there's still information processing it's just not done by us.

    431:

    A question that enthusiasts rarely bother to answer is: Assuming that mind uploading is possible, what do you have to offer that makes it worth peoples while to expend the resources to keep you running?

    There's a trivial solution to the question of how many resources it takes to run a human mind: about 500 watts, and a CPU that consists of about 1.5-2 kilograms of impure oil/water emulsion, aka a human brain. The 500 watts includes running the peripherals (i.e. your body) on an ongoing basis.

    The original human brain that we're emulating may be the most compact possible platform for a human mind to run on; but it also does a bunch of other things that we might be able to get rid of via deduplication. For example, every human neuron contains its own nucleus and DNA replication framework; if it's not directly implicated in the neural functioning we're interested in (i.e. producing and propagating an action potential and trans-synaptic signal processing) then we might not need to replicate it on the order of 10^11 times in our emulation.

    So my guess is that a mature mind uploading technology might require well under half a kilowatt per mind, which is supplied by about two square meters of photovoltaic panels in Earth orbit (above atmospheric transmission loss).
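    (A quick sanity check of that panel figure, assuming a panel efficiency of around 20%; the solar constant above the atmosphere is roughly 1361 W/m².)

```python
# Back-of-the-envelope check of "two square metres of PV in orbit" against
# the ~500 W per mind estimate. Panel efficiency is an assumed figure.
solar_constant = 1361.0    # W/m^2 above atmospheric losses
panel_efficiency = 0.20    # assumed; typical cells run roughly 20-30%
panel_area = 2.0           # m^2

power_watts = solar_constant * panel_efficiency * panel_area
print(f"{power_watts:.0f} W available")   # ~544 W, in the right ballpark
```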

    Upshot: a mature mind uploading technology should in principle be extremely cheap to run, in today's energy-economic terms.

    432:

    I think you're onto something here.

    Someone who has invested untold resources of effort, time, and belief into a theory that human thinking can be mapped onto pure number crunching is going to have trouble coping with ideas that are contrary to this notion.

    For this person, the effects of something like Toxoplasma gondii, or any other organism that causes measurable changes in thought processes (up to and including the rabies virus, I fear), might be dismissed out of hand as irrelevant, even if they test positive for the parasite. They'll probably define psychoactive drugs, from caffeine past LSD, as irrelevant as well.

    The even more fundamental problem with any ideologue that espouses purely mathematical models about reality is that there's a subtle but pervasive confirmation bias at play. Math is about stuff that can be quantized. Someone who's biased towards reality being mathematical will point to the successes of math at explaining phenomena and claim that as evidence that they're correct. A skeptic of the theory will point to all the hard-to-quantify blobs and smears in the universe, at all scales, as evidence that the math-heads only notice the tractable problems, and thereby fool themselves into believing that math is all.

    This is all very abstract, so here's an example: fungal population genetics. Population genetics in general is about how variations of alleles manifest in a population, and it has all sorts of neat statistics. Doing population genetics on fungi is hard, because, among other things, it's hard to determine what constitutes an individual fungus. For most multicellular fungi, if you chop an individual in half, you have two fungi, and if you let these two ramets sit next to each other, they might fuse back into a single individual. Indeed, a good chunk of the phylum Basidiomycota reproduces by fusing individuals, and it's normal in many basidiomycete species to find individuals with one genome (n) as well as two genomes (n+n). Nuclear fusion (where n+n becomes 2n) only occurs in these fungi in cells that produce haploid (n) spores.

    What counts as an individual to a fungus, either genetically or physically? It's something that's easy to experience, hard to count, and central to life on this planet. Personally, I started seriously thinking about Buddhism's whole point of the fallacy of ego existing while I was trying to understand fungi, and that's how we get back to the idea of being able to upload a human into a computer. Humans are so blobular and boundary-ignoring that I'm not sure we can even do that effectively.

    433:

    I just had thoughts about the combination of deduplication and running multiple human simulations. I HOPE YOU ARE HAPPY.

    434:

    I started seriously thinking about Buddhism's whole point of the fallacy of ego existing while I was trying to understand fungi,

    {grin} I just read New Scientist and try to nod knowingly at the right moments. I'm more challenged by "where does the brain end" and "what mechanism produces consciousness" way back in time. I mean, I have been exploiting bugs in dog brains since way back, but I'm also aware that (most modern) dogs only exist due to bugs in human brains[1].

    For me there was a crisis moment when my increasingly desperate attempts to pretend that mind-body dualism was possible went to shit both literally and figuratively. I was given some fairly nasty antibiotics which did things to my microbiome. After feeling out of sorts to the point of suicide I "discovered" a link between fixing my yuppie first-world non-problem of "non-celiac gluten sensitivity" and mental health. Diet changes, then eating various disgusting cultures not only made various gut upsets drop dramatically in frequency, I felt better too. Causation seemed very much to run from diet to brain, albeit via "brain says: change diet", arguably preserving the mind as paramount.

    Note that there's a difference between "I'm sick and that makes me feel unhappy about being sick" and "I have a minor physical problem and also clinical depression". Please argue about which applied to me somewhere else if you have to argue it at all.

    [1] viz, working dogs are not a bug, because laziness is a valuable trait. But cuteness is an exploit - dogs are not human babies.

    435:

    "...mature mind uploading technology might require well under half a kilowatt per mind"

    I had also considered dedupe for genuinely uniform entities. Something like an ontology and a collection of classes, though I am not seeing clearly whether or not instantiated entities need to be mutable anyway. Optimised, that might not matter if mutability is not often required (a copy on write strategy would work I suppose).
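    (A minimal sketch of the copy-on-write idea, with entirely made-up entity names and data: instances share one reference copy of the bulk data and only take a private copy when they're first written to.)

```python
# Minimal copy-on-write sketch for deduplicated entities. Names and data are
# invented for illustration; the point is just the sharing/divergence pattern.
import copy

class DedupedEntity:
    _template = {"genome": "...reference copy...", "organelles": ["mitochondria"]}

    def __init__(self):
        self._data = DedupedEntity._template         # shared until first write

    def read(self, key):
        return self._data[key]

    def write(self, key, value):
        if self._data is DedupedEntity._template:
            self._data = copy.deepcopy(self._data)   # private copy on first write
        self._data[key] = value

a, b = DedupedEntity(), DedupedEntity()
print(a._data is b._data)     # True: one copy backs both instances
b.write("organelles", [])     # b diverges; a keeps sharing the template
print(a._data is b._data)     # False
```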

    I'd suggest that a larger saving on energy comes from the fact that while the system might need to emulate energetic processes like muscle contraction, respiration and digestion, the actual energy expenditures for these processes would not be required. Conversely, emulation is more compute intensive than running something natively, though I agree mature emulators benefit heavily from optimisations. We just don't know how complex the required emulation might be at this point.

    There is a query and potential problem that occurs to me, one that dedupe touches on. In a way it depends on how deep we need the model to go, as I asked above whether emulation at the molecular level is deep enough. I get that where uncertainty comes into play, a lot can be modeled with heuristics or truth tables, or even rules of thumb (which I suspect is why our theoretical physicist friend is so confident that everything can be done from within a classical computing paradigm). But this doesn't need to involve uncertainty... treating hormonal responses as "external" and reducing them to inputs/outputs still implies some kind of algorithmic gaming around how the "external" system determines these inputs. Likewise gut bacteria and other internal symbionts.

    The query is in terms of what mischief could be wrought on the subjective experience of the emulated person through gaming the heuristics or simplification rules in place for any sub-entity that is simplified. I'm not suggesting that modeling these additional entities separately isn't achievable, but the question is more around how subjective the resulting experience remains, versus how much it might be biased to certain responses in a way that doesn't occur on the squishware. Whether that means preserving free will is a matter of jealously guarding the integrity of your random number generator. For example, the statement "the brain controls the gut" is somewhat naive: if anything it's the other way around: certain inputs will predispose the subjective entity to certain viewpoints and decisions.

    Obviously we're subject to this stuff as we are now, but the difference is that in the emulated version someone gets to make a conscious decision about how these things would work. There is a definite cession of some control over subjective experience (no matter how a process of uploading might actually work) to external parties. To the level of whether running, say, on Dell hardware may, just by coincidence, lead to your views aligning with the interests of the State of Texas.

    436:

    Look, if you want to be simulated together with the spiders in your eyebrows, have fun. I, for one, am more interested in the thoughts from my own brain, and am happy to not need to clip my toenails.

    437:

    The question is how much you would be you, and stay you, without those. Say, if you never ate or breathed. That's the very top level of "unnecessary functions that can be dispensed with" but we don't know whether they're actually dispensable by humans. Inherently uploading produces people who are differently human, but there will likely be a difference between differently-human-I-like-this and differently-human-and-broken.

    There's a variety of stories that mention or explore the consequences of ideas like having more processing power available. Trivially, if you live 1000x faster than meat puppets they might as well not exist. Even if you can slow down to their speed when you want to, will you want to? I can watch youtube at 1x speed, but I avoid doing it. I prefer 2x, and some things I download and play back at 3x to 5x speed. That's a hassle, but it suggests to me that given the option of interacting with youtube at 0.001x speed I would strongly avoid it. If only because while I'm doing that, the other kids at 1000x speed will be having 1000x more fun that I have to catch up on when I go back to normal speed. Kinda FOMO, kinda practical "I went away for a year, what happened?"

    It's possible-to-plausible that a lot of what happens when you eat is driven by how your gut flora react to it. So failing to simulate those could make eating very boring, possibly to the point where simulations discard it because it's just pointless. One possibility is that the feeling of finished eating has two sources - hormones and physical stretching. If you take away the hormones you're just eating until you're tired of eating. But those hormones don't all come from genetic-you, they also come from the passengers. Take away the "not information processing" by the passengers and what happens?

    Over the longer term, we know that people who lose sensation in body parts suffer mental oddness. It's not especially studied because it used to be rare and those people need lots of physical care (which also leads to "aren't you grateful to be alive! How do you feel?" ... "grateful"). So... without those passenger-provided feelings that come from food, would you also become odd in a characteristic way?

    438:

    running, say, on Dell hardware may, just by coincidence, lead to your views aligning with the interests of the State of Texas.

    What's the difference between an advertisement and a hack when the target is software?

    439:

    For some reason this seems apropos:

    https://www.xkcd.com/793/

    440:

    "...Toxoplasma gondii... psychoactive drugs... irrelevant as well"

    Not irrelevant, just not worth the effort to consider the trivial simulation difficulty of tweaking a bunch of global coefficients. (That is an oversimplified description even by my own terms, but it's the right sort of order of magnitude.)

    What is irrelevant is the whole argument over simulating just the brain or the rest of the body as well and if the latter how much of it. I'm not sure whether the fixation with partitioning at the brain comes from taking the computer analogy too far or simply because approximately all the stories that describe the process do so in a brain-oriented style, but it's making the wrong conceptual split.

    Computers are explicitly designed to reflect the conceptual separation between hardware and software in their physical instantiation; it's almost in their definition, if you look at how the question "what was the first computer" attracts answers that use "the first stored-program computer" as the threshold, and it's certainly the default for anyone these days imagining "a computer (not further qualified)". But that same hardware/software concept applies just as much to something like, say, for instance, a pair of scissors or a chisel. Certainly it's far harder and less natural to find a physical reflection of the separation, and (English being what it is) correspondingly harder to describe it, but it's there all right, as is emphasised by such occurrences as buying some el-cheapo scissors from the pound shop and discovering on getting home that the cross-section of the blades is rectangular, or the handle falling off the chisel. You can still do all the usual softwarey things; copying is obvious; a pair of pliers shares a software module with a pair of scissors; a cylinder mower shares a different one; I once reprogrammed a pair of scissors to add a circlip-pliers function, etc.

    The system we're talking about simulating here is much more like the scissors than it is like a PC, but that doesn't mean it can't be simulated, it just means you have to go about it in a different way and it's a lot more difficult. The question of whether you need only simulate the brain or whether you have to simulate some/all of the rest of the body too is merely one aspect of the overall question of how detailed does the simulation have to be to attain useful accuracy. It may affect how much processing power you end up needing, but it says nothing about whether the idea can or can't be realised.

    I absolutely don't think it's easy enough to be realised on any envisionable timescale, or quite probably ever by humans at all; and I think this is a good thing, because although it makes for some jolly good stories it would be bloody horrible if it existed in reality. When I'm dead I want to stay dead, thank you very much. I also think it's possible that fundamentally intractable engineering problems would crop up that would prevent any attempt from succeeding no matter how capable the engineers. But I don't see any reason to consider it impossible in the way that climbing out of a black hole is impossible; it's just ridiculously difficult.

    The one thing that I can think of that would make it impossible would be the existence of a "soul" - "spirit" - "vital principle" - whatever you like to call it: some ineffable non-physical constituent that entirely eludes detection, let alone analysis and simulation. Everything else is just degrees of complexity.

    441:

    Bottom-up simulation quickly runs into problems.

    I think one insurmountable problem would be if it turned out that quantum effects play an absolutely critical role in distinguishing two similar brains from each other. We'd be left fiddling with the hidden variables until we found a combination that produced something we agreed was like the original.

    Similar problems apply at lower levels of detail, just because it's currently considered unwise to take someone apart to find out how they work. At the point we can put them back together we are in territory I can't imagine. So any level of simulation that requires destructive scanning will be problematic.

    442:

    "So failing to simulate those could make eating very boring, possibly to the point where simulations discard it because it's just pointless."

    I'd discard it like a shot if I didn't have to do it to stay alive. Absolute pain in the arse it is. And breathing, too.

    443:

    Top-down simulation is what we're doing now, but that is just the Turing test by another name. I think we're going to see this from criminals before we see it offered as a service, just because a barely-usable simulation has proven incredibly useful to criminals already. FFS, people still fall for "I am a Nigerian Prince", they don't even have to go for "I'm your friend Bob, I got mugged in foreign, send money". Imagine the latter when it's able to make a video call that you can't distinguish from your actual friend Bob... unless Bob is standing next to you saying "WTF!".

    444:

    I don't think any of that is incompatible with my post.

    445:

    {eating} I'd discard it like a shot if I didn't have to do it to stay alive.

    Right now is not the best time to make that argument to me in particular... We're just past summer solstice so I can buy hot cross buns again. If I could live on hot cross buns and the particular artisanal sourdough fruit bread I like, I would do so (at least for a while).

    I was wondering at a more philosophical-engineering level whether people who don't eat might be qualitatively different from people that do in a way that turns out to be important. Like people-who-eat-electricity will be different from meat people, but more subtle.

    446:

    Oh, and I'll do you a dare.

    Answer enough correctly, you get the key to the (Greek Mythology Analogy: Titan Locked Zone) where within 20 seconds you'll discover that...

    Your little Brain isn't exactly alone up there. You're more like the lichen to the Loas.

    447:

    As other people have pointed out, mind uploading is based on the Cartesian notion of the mind/body (originally body/soul) duality. Unfortunately, this duality has been largely flattened out by medical science, which chased down more and more and more things which were thought to be the bodily "seat" of the soul and proved that nope, they're actually doing something else. Now we're faced with neuroscience and psychiatry/psychology, which are engaged in the process of proving pretty thoroughly that "mind" as we think of it is basically the emergent result of having a physical body which functions in certain ways - and also, by changing the physicality, you alter what the "mind" is capable of.

    To use the most obvious example I have to hand: I'm on the autism spectrum. Which means my brain is literally wired differently to a neurotypical person's (they're finding people on the autism spectrum tend to have a greater degree of neural interconnectivity than neurotypical people) - a small physical change which makes a very real difference to the way I think, and in some cases proves near-disabling in context. Let's put it this way: figuring out how social interaction worked from scratch as a child was not a pleasant process to have to go through.

    I have chronic endogenous depression (and have done since I was a small child - it's not known whether this is directly because of the autism-spectrum differences, or an emergent consequence of living with them in a world which is designed for neurotypical people), and I've taken SSRI type antidepressants for this depression in the past. I stopped because they stopped working for me - my brain apparently figured out how to be depressed around the anti-depressive medication, which is NOT SUPPOSED TO HAPPEN according to the biotechnologists who design such things. But it did in my case.

    Final example: the whole process of getting onto psychiatric medication of any kind is basically a wonderful process which has as its nearest physical analogue "throwing darts at a dartboard blindfold and hoping to hit the bullseye".

    For those who haven't been on what I term the "meddy-go-round", it works like this. You go to your GP and they offer you an antidepressant - usually one of the SSRIs. You take this for about six weeks at a low-to-moderate dose, and then see whether the depression has gone away, and whether the side-effects are what your doctor deems to be tolerable for you (whether you agree with them may or may not come into the picture).

    If the depression has gone away (and you have about a 50% better chance of this on your first try with an SSRI, because placebo effect is a Thing here) you're encouraged to stay on the anti-depressant, because obviously it's working. If you're still depressed, you're either given a stronger dose of the existing anti-depressant (which means waiting another six weeks for the level in your bloodstream to titrate up) or you're taken off that one, and put onto another one (six weeks minimum to come off the first one, longer if it's one of those ones which requires tapering off; six weeks to go onto the next one - so just changing medication requires three months of your time) to see whether that works.

    If it doesn't, you may be referred on to a psychiatrist, to try a few more combinations or permutations of medication, each with their six week minimum effect time, and each with their own particular combination of side-effects (and each of which has to be balanced with anything else you're taking on a regular basis. For example: I'm also on thyroxine for an under-active thyroid - and what happens if you take a psych med with thyroxine is the psych med becomes less effective, and so does the thyroxine).

    And of course, even though your brain isn't supposed to acclimatise to these things (because they're supposed to be supplementing or moderating the existing brain chemistry) occasionally (such as in my case) it figures out how to, at which point you need to start the whole wretched process all over again.

    As I said: throwing darts at a dartboard blindfold - eventually your medical team may hit the bullseye, but my gods, it can be a long and drawn-out learning process until they do. (By contrast therapy is debugging a system while it's still running without being able to look at the original source code).

    Come back to me about mind uploading when we've figured out how to cure serious mental illness on a reliable basis.

    448:

    Disclaimer: unlike theoretical Math / Game theory nonsense, this is one of those "Games" you play on the old old rules.

    Win. Or. Die.

    Adapt. Or. Die.

    Blend with the Entire Universe. Or. Gibber and die (usually suicide but hey).

    It's very very unusual that anyone talking about the Singularity has any actual experience with the big big wide world.

    (and - sigh, would you kindly stop imagining that anything is a physical threat here. If Disney can change the horrors of eugenically deterministic 'midichlorians' back into actual physics I think we can all calm down).

    Shoot me a paper or two: I don't see anything interesting in Math / Game Theory applications you're doing, perhaps I'm wrong. [Hint: Rare]

    p.s.

    OHHH DAT SATELLITE. ALL GETTING ANGSTY, EH?

    If you're paying attention the initial pictures all got pulled and have been replaced by the official version: http://www.photosofstuff.xyz/Zuma-by-SpaceX/

    Hint: wrong time of day there boys. The originals, well... a bit earlier on. Far far more detailed burst of blue on the 2nd / 3rd stage launches.

    Oh, there you go: https://i.redditmedia.com/Daq3U0f10WqEXSZF0xsv84nAXOd5xuxllAFy-RCnbD4.jpg?w=1024&s=c4050892b12544873399798f198f850a

    Compare n contrast.

    Lichen Do Game Theory. (No, really: please don't. Take some mushrooms and stop assuming your interior blankness is universal, it's really fucking boring).

    449:

    Normally I don't like this kind of post (which I'm sure shocks you), but the notion of lichens doing game theory is worth a grin. When it's unclear what qualifies as the entities playing and what qualifies as the game, who plays what and how? Part of the complexity is that it looks like unicellular basidiomycetes might be important in making the ascomycete/algal relationship work better, much as various populations of unicellular ascomycetes (specifically but not exclusively Saccharomyces cerevisiae) are critically important for mediating the symbioses between Homo sapiens and Triticum aestivum, Hordeum vulgare, and many others. Except that the lichen yeasts seem to be inside the lichen tissue, rather than external to the symbiotes as with humans, yeast, and grains.

    450:

    And that leads on to a broader question about what sort of experiences you would still want to (be able to) have, and the related question about the capability of an emulated sensorium to deliver them.

    Take alcohol as an example. Certainly this could easily be simulated by introducing a virtual ethanol into the virtual brain chemistry of the emulator. However, in our world now most people who consume alcohol do not do so by injecting a standardised dilution of ethanol. Yet including the experience of it in a virtual sensorium without including a model of the traditional method of consumption is surely creating an experience more like injecting than drinking.

    Or conversely, consider the situation of someone in hospital recovering from surgery, being handed a dongle with a button that, when pressed, administers one dose of morphine. I haven't been in this situation personally, but I know several people who have and I'd be stunned if there were not several people here who have. Isn't the simulation going to essentially hand you that button, forever and at all times? Won't people who have that automatically be "different"? I mean, regardless of whether there are other mechanisms to suppress (or hey, it's a wacky old world, induce) pain?

    451:

    Lichens make Fungi look like amateurs when you get down and dirty. (Temporally they did it before fungi existed in a precursor type of way but I'm not sure you're allowed to know that yet, but the hacks they did it with without the free stuff everyone assumes as a base is pretty impressive). Or put in a way for non-biologists: before coal and fungi there was something else. It was just slow.

    Slime Mold beats AI @ traveling salesmen models. Etc.

    Search terms: hydroxyl radicals, Chelators. http://pubs.acs.org/doi/pdf/10.1021/jf60126a004

    Oh, and by the way: you're about to enter the 10k wild ride. Strap in.

    The oxygen content of the open ocean and coastal waters has been declining for at least the past half-century, largely because of human activities that have increased global temperatures and nutrients discharged to coastal waters. These changes have accelerated consumption of oxygen by microbial respiration, reduced solubility of oxygen in water, and reduced the rate of oxygen resupply from the atmosphere to the ocean interior, with a wide range of biological and ecological consequences. Further research is needed to understand and predict long-term, global- and regional-scale oxygen changes and their effects on marine and estuarine fisheries and ecosystems.

    Declining oxygen in the global ocean and coastal waters Science 05 Jan 2018

    ~

    Anyone who still thinks their Mind / Brain is an autonomous unit strapped into a meat sack .... is an idiot.

    Hint: Your DNA shares 90% with worms. You got hacked a while back there, Mr "Game Theory" MAN.

    452:

    “I'd discard it like a shot if I didn't have to do it to stay alive.” Wow. Whereas if someone uploads me into a simulation that doesn’t have curry and sushi and ice cream, I’ll ask them to switch me off and let someone else have my compute cycles.

    I think how one weights the importance of such things affects one’s estimate of the detail and scope a simulation would require to be worthwhile.

    453:

    (and, yeah: totally a paper from 1963: the Future is badly distributed or silenced or put into a bag and thrown into a river or hit over the head with a spade or... making fun of people?)

    Punchline: Mateus Araújo, you really do never want to meet the Ang(l)els of your theory.

    They're not nice. In fact, they're pretty psychotic.

    Riddle me how:

    Quantum Theory Math Lichen Higher Order Thinking

    All tie together: Singularity & Brain / Mind: you've not convinced me you've actually a clue.

    454:

    UGH.

    Why do HSS always assume simulation means banal Star Trek hypnodeck?

    Aren't y'all wanting to transcend into something more? Even the proles back in 79AD had some notion of "wings" (cough Assyrian Goddesses ignore that bit of where your 'Angels' come from cough) and "beyond the shitty mud" that the Catholics then turned into 101 subjugation land.

    Can't even do Transcendent Theology right, it's fucking depressing.

    And no: The Singularity in Silicon doesn't work. Grow the fuck up already. There's a very simple and very obvious reason for this, which is dependent on your maturity of understanding what "AI" means and what Hardware is etc.

    Pro-tip: WRONG. FUCKING. 5D BRANE. VECTORS. YOU. MUPPETS.

    455:

    Zzz.

    Deleting the theory posts is dull. It's not insulting to take apart the math or his blog. Or all the Game Theory stuff. Go you!

    Reader commentary:

    Volkswagen’s Cute New Ad Star Is the Toughest Baby Ram in All the Land AdWeek, 27th Nov 2017

    BORN CONFIDENT YT: Advert / Music: 1:00. Volkswagen T-Roc using Lenny Kravitz's hit song. Here's the original: Are You Gonna Go My Way YT: Music: 3:33

    Let's just pretend all the deleted technical stuff actually shows the madness of 'Game Theory' here.

    Hint: it really does and I can parse that stuff faster than you.

    456:

    Oh, and he's for real.

    Experimental Entanglement of Temporal Orders PDF, 2017

    Giulia Rubino (1), Lee A. Rozema (1), Francesco Massa (1), Mateus Araújo (2), Magdalena Zych (3), Časlav Brukner (1,4), Philip Walther (1). (1) Vienna Center for Quantum Science and Technology (VCQ), Faculty of Physics, University of Vienna, Boltzmanngasse 5, Vienna A-1090, Austria; (2) Institute for Theoretical Physics, University of Cologne, Germany; (3) Centre for Engineered Quantum Systems, School of Mathematics and Physics, The University of Queensland, St Lucia, QLD 4072, Australia; (4) Institute for Quantum Optics & Quantum Information (IQOQI), Austrian Academy of Sciences, Boltzmanngasse 3, Vienna A-1090, Austria

    ~

    He's fucking wrong, but he's got funding.

    457:

    (Hey! Moderator: do something useful and clean that up and don't delete the posts where I already had read his papers and knew his angles already!)

    shrug

    458:

    (((IF YOU WANT TO PLAY HARD-BALL ASK WHY THEY'RE DOING QUANTUM ENTANGLEMENT ON A $0.20 CHIP BOARD AND NOT, SAY, BETWEEN A LAZER AND A SATELLITE LIKE THE CHINESE ARE DOING WHILE SPENDING ~$50 BILLION. OR HOW THEY ARE MASKING THE COPPER IONS INTERFERENCE)))

    Next, to verify that the control qubits are entangled at the input, we perform a Bell test on the path DOF of the photon pairs by recombining the two paths of each photon on a BS and scanning the phase of these two interferometers (this allows us to measure the path qubits in different bases). The resulting two-photon coincidence patterns are shown in Fig. 3, Panel c). From these fringes, we estimate a Bell parameter [28] of S_control = 2.59±0.09. These measurements confirm that the polarization DOF of the two photons (target qubits of the entangled quantum SWITCH) is in a separable state, while the path DOF (control qubits) is highly entangled.

    THIS IS THE JOKE: HE DOES CIRCUITS, NOT BRAINS.

    HE KNOWS NOTHING, JOHN SNOW.

    459:

    The talk about doing away with the things that make life worthwhile reminds me of this vaguely apropos sequence:
    ‘Well who would miss it?’ enquired Benjy.
    ‘I thought you said you could just read his brain electronically,’ protested Ford.
    ‘Oh yes,’ said Frankie, ‘but we’d have to get it out first. It's got to be prepared.’
    ‘Treated,’ said Benjy.
    ‘Diced.’
    ‘Thank you,’ shouted Arthur, tipping up his chair and backing away from the table in horror.
    ‘It could always be replaced,’ said Benjy reasonably, ‘if you think it’s important.’
    ‘Yes, an electronic brain,’ said Frankie, ‘a simple one would suffice.’
    ‘A simple one!’ wailed Arthur.
    ‘Yeah,’ said Zaphod with a sudden evil grin, ‘you’d just have to program it to say What? and I don’t understand and Where’s the tea? - who’d know the difference?’
    ‘What?’ cried Arthur, backing away still further.
    ‘See what I mean?’ said Zaphod and howled with pain because of something Trillian did at that moment.
    ‘I’d notice the difference,’ said Arthur.
    ‘No you wouldn’t,’ said Frankie mouse, ‘you’d be programmed not to.’

    I suppose it's similar to my question above about how decisions made for the purpose of simplification bias the viewpoint of the uploadee. Likewise, whether or not enjoyment of food or other things we take for granted is part of the deal seems to inherently carry a (potentially hidden) fingerprint of the preferences of implementers (potentially relatively junior ones).

    460:

    Yes, I'd definitely be careful about selecting 'brain in a jar' as a lifestyle option.

    For example, memory is enhanced by epinephrine outside the brain stimulating the vagus nerve; interestingly, the vagus is also called the wandering nerve as it touches most of the major organs, including the gut. So the body can and does have an effect on cognitive performance. Then of course there's the pervasive role that emotions play in our decision making, i.e. how we feel is usually much more important than facts when we make decisions. Finally there's the question of whether all memory is stored in the brain - that is the current theory, yes, but there are a number of apocryphal stories out there about heart transplants that make one wonder.

    Bottom line here is that if you want to upload yourself (in any realistic use of the word 'yourself') it's going to have to be brain, vagus nerve, heart, endocrine system, gut and all. I'll stick with the hot cross buns thanks very much. :)

    461:

    No, they won't. (The organization for engineering, build & integration, and release just really aren't suited for it.)

    But they did.

    https://support.apple.com/en-us/HT208331

    10.13, 10.12, AND 10.11

    462:

    If you follow the conversation back you will see it originated around discussion of Yosemite (10.10) and earlier.... So SEF's point stands.

    463:

    Re: lottery, we could then have a referendum at the end of their term to see whether they get a small stipend and a commemorative plaque, or go to jail for being dreadful?

    464:

    This is not an insurmountable problem, although it does have some fascinating consequences. There are two ways that quantum effects might be relevant to the functioning of the brain:

    1 - The data encoded in the brain is all classical, but processing this data (i.e. thinking) is done using large-scale quantum interference. It sounds fanciful, but is not inconceivable: Grover's algorithm is more efficient than classical unstructured search, and the quantum Fourier transform allows for some really nice data processing tricks, so there would be an evolutionary advantage for a brain that can do this (the problem is, I don't think it is even remotely possible for our brain to run these quantum algorithms; see the query-count sketch after this list). In this case mind uploading and copying around is still possible, but your brain must be run on a quantum computer, a classical one is not enough.

    2 - The scenario proposed by Moz, where the precise quantum state of the brain is essential for your identity and cognition. Because of the no-cloning theorem, being copied around is impossible, and mind uploading is only possible in a destructive way (you need to kill the biological body in order to create the simulated version). Religious scientists are very excited about this possibility, as it gives a scientific underpinning to the idea of a unique soul. But besides wishful thinking, I don't think there is any reason for this to be true.
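
    To make the "more efficient" claim in point 1 concrete: for unstructured search over N items a classical search expects on the order of N oracle queries, while Grover's algorithm needs roughly (π/4)·√N. A minimal illustrative sketch in Python follows; the search-space sizes are arbitrary, and nothing here suggests a brain actually runs such a thing.

        import math

        def classical_queries(n):
            """Expected oracle queries for classical unstructured search (~n/2 on average)."""
            return n / 2

        def grover_queries(n):
            """Oracle queries for Grover's algorithm, roughly (pi/4) * sqrt(n)."""
            return (math.pi / 4) * math.sqrt(n)

        # Arbitrary search-space sizes, purely to show how the gap grows.
        for n in (10**3, 10**6, 10**9):
            print(f"N = {n:>13,}: classical ~{classical_queries(n):,.0f} queries, "
                  f"Grover ~{grover_queries(n):,.0f} queries")

    The advantage is quadratic rather than exponential, which is worth keeping in mind when weighing how much evolutionary payoff such machinery could actually deliver.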

    465:

    As always ...
    How about looking back at the slow singularities we have already been through?
    Writing - & then Printing
    Steam Power
    Electricity use & then computing.

    No real going back from any of those is/was there?

    467:

    I just had thoughts about the combination of deduplication and running multiple human simulations. I HOPE YOU ARE HAPPY.

    I was thinking in terms of deduplicating things such as the simulation's model of mitochondria or ribosomes, of which there are many in every single cell; they all (in principle) do more or less the same thing, and the thing they do is not obviously connected to any interactions outside the individual cell.

    Not deduplicating large chunks of the neural connectome that give rise to aspects of mind ...!

    468:

    My point was they might do more than current and one back. He said they would not. They did.

    469:

    Oh dear. We were having a very interesting conversation about the possibility / desirability or otherwise of "Mind" ( note the quotes ) uploading, & also definitions of & likelihood of another technological or other singularity, Vinge-style or not.

    Then the following happened ...

    445, 447, 450, 452 / 3 / 4 / 5 / 6 / 7

    There MIGHT JUST be content in there, but is it worth the effort?

    Meanwhile, more of the usual random insults, of course, such as: WRONG. FUCKING. 5D BRANE. VECTORS. YOU. MUPPETS.

    OHHH DAT SATELLITE. ALL GETTING ANGSTY, EH? If you're paying attention the initial pictures all got pulled and have been replaced by the official version: Etc. I looked at the pictures, yes, they are slightly different. So what?

    470:

    'Dark State' has arrived!

    471:

    Yes, I'd definitely be careful about selecting 'brain in a jar' as a lifestyle option.

    At risk of self-derailing: I have two personal criteria to decide whether I'm willing to accept an offer of living my future as a virtual brain-in-a-jar:

    • I'm terminally ill or in such chronic pain that I'm seriously considering assisted suicide. (Uploading as an alternative to assisted suicide/natural death, in other words? Is a no-brainer.)

    • The process guarantees continuity of experience (cf. Hans Moravec's thought experiment, circa 1988 — this might well in practice be impossible) and the post-upload sensory realm is no less rich or complex than the natural environment (meaning, in practice, I'm not the only ghost in the machine, and the machine contains a full sensory landscape that's not trivially predictable or repetitive).

    There are medical conditions such that being an upload in a poor sensory realm might well be an improvement — persistently vegetative states may qualify.

    472:

    There MIGHT JUST be content in there, but is it worth the effort?

    Yes, Greg, there is content there. Stop sniping or you'll receive a Yellow Card.

    473:

    The process guarantees continuity of experience

    For a lot of people (me included) this would be a requirement. But I think there's a good reason to accept the non-continuous kind; just like funerals it's for the living and not the dead.

    If I was terminally ill and was offered to have my mind copied as close to the point of death as possible, before neurodegeneration occurred, I'd take it even without the continuity. The fact that my loved ones wouldn't be left without "me" would be a huge comfort.

    474:

    OK, noted - but ..... Where/What is said content - I know I'm expressing this very badly - how does one extract the putative meaning that you can see, but literal-brained ( We'd better not use "mind" given the current discussion? ) me cannot? And why can't we have it "in clear" - please?

    475:

    "How does external information get into the body?"

    Got a meta-answer for you: you might want to look into General Semantics, and the Structural Differential, which is a lovely model that can be physically made with a scissors, paper, and string.

    Simple description: the ding an sich, the thing in itself, has a number of characteristics approaching infinite - perhaps the quantum state of every quark at t -> zero. Of those, only a small subset are transferable/observable. That reaches the next filter: which of those characteristics are perceivable? (I don't think Newton, for example, could perceive radio waves, and neutrinos are still rather iffy.)

    Next filter: of those that are perceivable, which are we capable of perceiving? I think, for example, it takes six photons to kick off a synapse.

    Finally, once it gets into our brains, we've got the basic filters (do you remember your last heartbeat?), and then the learned filters (That's a chair, not a mess of wood or metal and wire).

    And finally, voila, that, sir or madam, is a chair.

    476:

    "Genuinely uniform entities"? So, you're speaking of a spherical uniform of uniform density?

    And, if so, how do you distinguish between you and me, once uploaded? Or are we all reflections of the Dear Leader, who got uploaded first, and controls our uploaded universe?

    And if they're not mutable, kindly distinguish between an intelligent being, and the recording of Grampa's hologram (song by the Austin Lounge Lizards), that repeats endlessly.

    477:

    She of the many names! Welcome back.

    But the Loas, from my reading of the three? four? actually knowledgeable books, and a couple of friends who are separately somewhat into varieties of the same, don't seem to be interested in anything less than mobile beings of some intelligence. They are, perhaps, running simulations in their horses....

    478:

    Brain in a jar... but: would you want to stay in there, and not interact/perceive the RW? Consider this: if you live thousands or millions of times faster than you do in the RW, at what point do you tire of the VR, and get bored?

    479:

    There is content. Mostly personal attacks.

    480:

    Can you give Greg a yellow card every time he fails to ignore the many-named-one? I can deal with her, and I like Greg, but the cross-talk is a real annoyance!

    481:

    P.S. Does anyone know of a good Firefox extension for blocking blog posts by user? The one I was using recently apparently is no longer updated.

    482:

    You don't live thousands or millions of times faster, first, because of latency (there's a ton of computing power used to simulate a brain, body, and environment) and second because at some point you'll want to interface with living beings, and that means you'll need to run at a specific speed.

    483:

    I wonder if that's why human beings have such trouble with the fine structure of physics? Perhaps the simulation we currently participate in has deduplicated some of the basics of how a universe operates... Story-wise, there's an FTL drive in that, at least.

    484:

    Your second point is something I would regard as an absolute minimum necessary condition for the idea being worthwhile at all; see also Damian @ 449, which is a point that doesn't bother me, because I take it for granted that the simulated environment is complete enough to include pubs - otherwise it's not worth thinking about.

    But your first point relates to the main reason I'm glad it isn't currently possible and (if it is at all) isn't going to be until long after the information content of my body has been randomised beyond the remotest possibility of recovery: it removes the guarantee of permanent death. This is especially terrifying in the context of the idea that as the universe collapses (if that is indeed what eventually happens), the availability of energy to drive processing increases such that the rate of processing increases faster than the reciprocal of how much time the universe has left, so that an intelligence using that processing ability to manifest itself experiences subjective eternity. See also whitroth @ 477. Any situation that eliminates genuine death is one that ends in a hell of eternal boredom; the only difference is how long it takes, and that is always an infinitesimal proportion of the whole.
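
    For what it's worth, the arithmetic behind that "subjective eternity" scenario is just a divergent integral. A sketch, under the stated assumption that the available processing rate grows at least as fast as the reciprocal of the time remaining before the collapse at time T:

        % Sketch: assume subjective processing rate R(t) >= k/(T - t) for some k > 0 as t -> T.
        \tau_{\text{subjective}} \;=\; \int_{t_0}^{T} R(t)\,dt
        \;\ge\; k \int_{t_0}^{T} \frac{dt}{T - t}
        \;=\; k \left[\, \ln\frac{T - t_0}{T - t} \,\right]_{t \to T}
        \;=\; \infty

    Finite external time, unbounded subjective time; whether that reads as transcendence or as the eternal boredom described above depends entirely on the quality of the environment.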

    485:

    Actually scanning and uploading a brain in a jar sounds pretty problematic to me; personally I believe consciousness is as much about rationalizing and storytelling what your biological deep structures have already decided to do as about actually deciding things. That is gonna be super hard to untangle, and if we ever manage it the temptation to improve on it and fix the bugs rather than replicate it is going to be very tempting.

    However building a simulation of a person that is indistinguishable from the real thing is not impossible. All you need is total video / audio surveillance from birth to duplicate memory and some really really good ML

    Hard to say whether such a simulation is sentient but it’s probably as sentient as the average human

    I also really doubt any virtual personality is going to want to spend all their time in VR. There is no reason to do this as opposed to going the cyborg route

    “We are many, we are Bob” gets this mostly right IMO

    486:

    One of the things I pictured was how a large community of simulated brains would, over time, become more and more similar as dedup took effect.

    Hm, come to think of it, there's that big scene with the bouncing balls in A Wrinkle in Time, perhaps that's how that came to be.

    487:

    You don't live thousands or millions of times faster... because at some point you'll want to interface with living beings

    The worst of both worlds: your subjective experience is significantly faster and you can only interact at meat puppet speed (the argument that a conscious electronic mind isn't a living being is undoubtedly going to occur, probably viciously, but I know which side I'm on). I suppose the extra worst-of-all-options: your eMind isn't considered human so you're promptly enslaved, edited, copied, subsetted, and no-one cares what the various yous think or feel about that because, well, you're just software.

    488:

    if we ever manage it the temptation to improve on it and fix the bugs rather than replicate it is going to be very tempting

    To me that's a benefit, and I expect a lot of others feel the same. At one level the whole point is to improve on the dodgy wetware: starting with "can't be backed up", via "inevitably fails", of course we'll end up at "is kinda slow/bug-ridden/hard to patch".

    I can foresee my irritation with my phone re-installing and/or re-enabling all sorts of dodgy apps every time I get an OS update, happening all over again once I'm uploaded, just with "features" I don't necessarily want my brain to have (although there will probably be a patch for that, and I may not have the option of declining it even if I can later uninstall it). Again, what's the difference between advertising and malware when the target is a software mind? Or will we just take existing law/absence of law on advertising to also apply to software minds?

    Note that right now it's legal almost everywhere to find an exploit in a biomind and make an ad that exploits it. You could reasonably describe the whole advertising industry as hackers doing exactly that.

    489:

    (I'm assuming that you are not being deliberately cryptic. If you are I have no objection to my comment being summarily deleted.)

    I'm confused by your comment. Isn't the problem that quantum suicide removes the possibility of death, instead of the guarantee?

    Or making my point the long way: assuming that the mind uploading process is not destructive (why would it be?), I wouldn't hesitate a second before undergoing it. If the simulated reality is lame my copy can always kill himself (or I can deactivate the simulation from outside if I suspect that the copy is incapable of agency).

    But you bring up a deeply scary possibility, that my copy stuck up in a lame reality never actually manages to kill himself, and is forced to live through billions of subjective years of boredom until the heat death of the universe.

    Maybe the attempts of the copy to set the amplitude of his future survival to be exactly zero could make for an interesting short story. One that should never be published for mental health reasons.

    490:

    And for that reason alone I think said industry should be proscribed.

    491:

    Now I am confused in turn :) I'm not sure how "quantum suicide" comes into it. But then I don't really go for any of the mystical quantum death stuff, at least not in any serious sense; it's fun to tell stories and make jokes about, but AIUI it also depends on a fictional/humorous version of quantum theory which is big on adleiavde cats but has very little to do with the scientific theory. So I may well just have completely missed your reference.

    My problem is that if/when it becomes possible to record and reinstantiate consciousnesses, there is no way to make sure that some bugger somewhere doesn't do it whether you want them to or not. Jarts, perhaps. And you can't be sure of being able to permanently delete your own record; it may be that the OS supports unlinking but not overwriting (perhaps deliberately), or there may (will) be backup or bootleg copies made without your knowledge. So sooner or later such a copy may well end up being reactivated, and if the continuity aspect has proved solvable... well, I'm already satisfied that the answer to the question "do you want to live for ever?" is a definite "no" for me.

    492:

    The less said about quantum suicide the better, as it has contributed to actual suicide at least once. That is why I thought you were making oblique references to it.

    But there is nothing fictional/humorous about the Many-Worlds interpretation. It is actually the most scientific of the interpretations, in my professional opinion, and a significant minority of physicists agrees with me (30%, if you want a guesstimate).

    But to address your actual point, then, I wouldn't be so worried about that. A society that has mind-uploading technology is certain to develop strong morals/laws about what you are allowed to do with the copies, as soon as they realise the terrible consequences of letting them lie around.

    If suicide is a real possibility, then, I don't see why anyone would not want to live forever. It becomes just a question of dying painlessly at a time of your choosing as opposed to some deeply unpleasant death at a random time.

    What do you mean by "if the continuity aspect has proved solvable"?

    493:

    DON'T! PLEASE If only ... because, about once in 50 posts the Seagull makes a really useful comment ... Otherwise I would simply ignore him/her/it/them. But the rest of the time is a desperate struggle. And it's that dichotomy that does my head in ....

    494:

    No. See also I. Banks' Culture. People & even Minds get tired & eventually decide they've had enough & switch off .... I reckon I could cope with 1000 years, but past about 2500, I might get really hacked-off to the point of giving up ....

    495:

    A society that has mind-uploading technology is certain to develop strong morals/laws about what you are allowed to do with the copies

    I can't tell whether that's genuinely meant to be funny or whether you're being too subtly cynical for me. I can't see how it could be serious, put it that way.

    Insert any other technology we have, from nuclear weapons to drugs that make it hard to remember what happened just before the drugs took effect and ... oh look, "strong moral"... contest about whether they're good or bad, inconsistent legislation (that's vigorously contested to the point wars have been fought on the issue over nuclear weapons and various drugs), but widespread commercial availability of a whole range of the morally contested stuff.

    So yeah, on the evidence I have to say that it will probably take some time for the moral issues to become apparent and much, much longer to reach anything enough like consensus for legal systems to cope with them. I used to hope we'd get there with nuclear weapons, but now I think the best we can hope for is that no-one uses them.

    One problem: nations that ban enslavement of AI will be at a competitive disadvantage to ones that don't, at least assuming that AI can be enslaved (viz, no singularity). The "nice to AI" countries will be trying to work out how to deal with AI in a polite and friendly way while the US is already making AI that designs better strategies for world domination.

    496:

    Well quite.

    However I meant "entity" at the level of modeling. I've had in my head that this would all need to be modeled at the molecular level, though when Charlie is talking about dedupe above he's talking about cellular level structures.

    So for example, one molecule-as-an-entity might be adenosine triphosphate (ATP), which is important both in energy transmission and signalling. In our simulation, while many of its attributes would be inherited from some sort of class or type (defined in an ontology, class library or information model), it seems likely that you'd still need to instantiate a corresponding object in the simulation for each occurrence in the real physical system being modeled. Why? Because these molecules each have their own attributes such as position and orientation which are important to the processes being emulated. And as each one fulfills its role in metabolic processes, it may transform into other molecules.

    At the cellular level, however, an entity could be made up of multiple molecules and there's a question about whether there's a need to model each one as though they are individual. The same question applies going further up - are there structures that wouldn't change in nature, whose complexity is unnecessary to repeat and process individually? Charlie's examples above included structures in the cell nucleus and mitochondria. Another example might be an erythrocyte - you could model this as an individual entity with inputs and outputs, and not bother getting into the structural details of the components of the cytoplasm and cell membrane. The downside is that this is a simplification that takes us slightly further from being an accurate representation of the physical.

    Immutability is an attribute that allows the use of a single version of an entity in system memory, where each instance links to that rather than contains its own data. So if every single molecule of ATP behaves the same way, you have a single location in system memory describing how an ATP molecule behaves, and each instance of an ATP molecule refers to that, rather than keeping its own set of instructions. However if it turns out that the behaviour can be changed for reasons that are valid in nature, then you might need to keep ongoing updated rule sets for each molecule.

    Alternatively we could go a level deeper and make the behaviour of an ATP molecule an emergent property of its constituent atoms, as an outcome of focusing our model at the atomic level.
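
    For what it's worth, the "single shared description, many lightweight instances" scheme in the two paragraphs above is essentially the flyweight pattern. A minimal Python sketch, with invented class names and a made-up rule table, just to show the memory layout being described; it is not a claim about how a real molecular simulator is built.

        # Sketch of the shared-behaviour idea above: one immutable description per
        # molecular species, many per-molecule instances that carry only their own state.
        # The rule tables are invented placeholders, purely for illustration.

        class MoleculeType:
            """Immutable, shared description of how one species behaves."""
            def __init__(self, name, molar_mass, rules):
                self.name = name
                self.molar_mass = molar_mass
                self.rules = rules              # stored once, referenced by every instance

        TYPES = {
            "ATP": MoleculeType("ATP", 507.18, {"hydrolyse_to": "ADP"}),
            "ADP": MoleculeType("ADP", 427.20, {"phosphorylate_to": "ATP"}),
        }

        class MoleculeInstance:
            """Per-molecule state only: position, orientation, and a reference to the shared type."""
            __slots__ = ("mtype", "position", "orientation")
            def __init__(self, type_name, position, orientation):
                self.mtype = TYPES[type_name]   # reference to the shared description, not a copy
                self.position = position
                self.orientation = orientation

        # Millions of instances can point at the same two descriptions. If ATP behaviour
        # turned out to vary per molecule, each instance would need its own rule set and
        # the deduplication saving evaporates - which is the caveat in the text above.
        pool = [MoleculeInstance("ATP", (i * 1.0, 0.0, 0.0), (0.0, 0.0, 1.0)) for i in range(5)]
        print(all(m.mtype is TYPES["ATP"] for m in pool))   # True: one shared description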

    I comment elsewhere above about how more simplifications give more scope for implementers to "game" free will for the person being emulated. See @458 in particular.

    497:

    I am serious. We do frown upon murder. And we do frown upon enslavement. The question, of course, is whether the copies will be considered to be human, and therefore to deserve human rights. Probably not at first, but as soon as people realise that virtual torture is something that can happen to them personally, I think it will be quickly outlawed. The "White Christmas" episode of the series "Black Mirror" might interest you.

    As for the enslavement of non-human AI, I think you're right. Empathy with them will be low enough that there will be countries willing to do it, and they will have an advantage in competition.

    498:

    Sure, but at the point you start “fixing bugs” it’s really just an AI; it’s not remotely you anymore.

    499:

    Check out #195 above and then seriously reply about whether you think slavery is the only option.

    One of the stupidest things people do is to assume that aliens are necessarily enemies, and that the only solutions are submission or death. It's worth looking at the very long career of Candido Rondon to see a useful counterexample of what's possible when you don't adopt that premise.

    500:

    You're arguing that AI will get significantly stronger human rights protection than existing meat-based humans get. If anything I think they'll get protected as corporate assets, not human resources.

    Two examples: murder may be frowned upon, but the execution of long-term US and UK ally Saddam Hussein was not condemned by either nation. A few countries regularly execute their own citizens, and many more allow citizens to die unnecessarily as a matter of explicit government policy (UK social welfare policy, for example, cutting off benefits and underfunding the NHS).

    Slavery is technically not approved of, but Australia has strong legal and practical protections for foreign fishing vessels operated by slaves, likely to prevent convictions for slavery against the operators of those vessels which would embarrass the politicians who use them as a way to suppress wages for Australian fishers.

    I'm also not aware of a country that actively looks for evidence of slavery and uses that to ban the import of slave-produced goods or services, and the use of prison labour in many countries suggests that their objections are pragmatic rather than moral. A lot of countries also punish slaves when they catch them - the US and Australia both habitually deport captured sex slaves, for example.

    501:

    General question for the, erm, digerati:

    Have we passed the inflection point on Moore's Law? I'm not talking about quantum computers, because until I see that QuantumWord beats MS Word as a text processor and QuantumYouTube is a better addiction dispenser, it's missing 40% of what people need this stuff for (don't get me started on quantum porn. Is it happening or not? Uncertain. How fast is it going?). More generally, is a quantum machine general purpose enough to make it relevant for Moore's Law?

    No, my question is whether the curve on Moore's Law is inflecting yet. If it is, then we've already passed through a sort of singularity without noticing. Kinda sad if true.

    Now, about Koomey's law...

    502:

    I'm not disagreeing with you about alternatives to slavery, just asserting that slavery isn't out of the question on moral and/or legal grounds. I expect that "voluntary service" will be offered by AIs as part of a "we'll stop torturing you/not kill you if..." deal that the legal system will endorse. Whether AI will be granted human rights is a wild question, celebrity robots who already have them notwithstanding (and on that note (https://www.snopes.com/saudi-arabia-beheads-robot/ ... where is she now and has she been accorded the protections meat citizens get?)).

    503:

    Actually, don't we have broad international agreement that involuntary mental health treatment is torture (and the torture is bad when the wrong people do it)?

    So an AI with human rights might well have a legal case that any change to its mental state that was forced on it was illegal.

    Mind you, in a situation where it's perfectly legal to cut off electricity supply to anyone, including an AI, "human rights" might not mean very much. Like The Clash said, "you have the right not to be killed, unless it is done by a policeman, or an aristocrat" (or a power company).

    504:

    "...don't we have broad international agreement that involuntary mental health treatment is torture[?]"

    I might misunderstand, but there are laws that enable involuntary treatment orders in all Australian jurisdictions. I'm sure this is the case overseas too.

    On the other hand it is also possible to prevent your power and/or phone being cut off if you have previously shown you need these for medical reasons (you also get priority attention when they are not resourcing their service adequately). At least that is definitely the case in Queensland, the details might vary elsewhere.

    505:

    Yes, there's a conflict/tricky balance between involuntary admissions which are often necessary, and the labeling of people as mentally ill and subjecting them to involuntary admissions when they have unacceptable political ideas.

    I was thinking of the latter specifically in the case of an AI that thinks it's human (clearly delusional) or has other defects.

    506:

    "if/when it becomes possible to record and reinstantiate consciousnesses, there is no way to make sure that some bugger somewhere doesn't do it whether you want them to or not. "

    This reminds me of Dennis Potter's last TV work - the Karaoke/Cold Lazarus dilogy. "No biography!"

    507:

    it is also possible to prevent your power and/or phone being cut off if you have previously shown you need these for medical reasons

    ... and that flows from human rights law, rather than being something that current software already has the right to ask for. You can't get that in Australia if you're psychologically dependent on internet access, for example, you need a recognised medical condition. Whether "don't turn me off" from a spam classification program meets that test is, I fear, a joke.

    It's also problematic, or at least used to be, for mental illness. Someone I know had a parent who periodically used to demand that the phone be cut off. Preventing that was basically impossible, but caused difficulties until the child was old enough to get out.

    The whole "not cut off"/priority supply thing can be fraught, and from what little I know of it NZ and Australia are a lot better than many rich countries in that regard. I haven't looked into it in Oz so much, but in NZ the social welfare system used to consider power company demands as priorities and would cheerfully take them out of your benefit. Then they'd wait for you to come in to complain before explaining that you needed to talk to the power company. But let us not talk about social welfare systems made to look like profit-driven businesses.

    508:

    On another note, the Stackoverflow Developer Survey has questions about ethics this year. I found it interesting but sadly the only way I've found to see the questions is by doing the survey. I expect their answers/summary postings will reveal most of it. Note that it's explicitly there to help them better run ads on their site - but as they say, at least they ask rather than just vacuuming up everything they can and selling it to whoever wants it.

    509:

    “we do frown upon enslavement“ You must have missed previous discussions here about the US private for-profit prison system.

    510:

    Many worlds: I'm not saying the scientific interpretation is fictional/humorous. I'm talking about the mystical extrapolations that try and give a gloss of false scientific plausibility to some well-known piece of time-honoured woo, eg. immortality, telepathy, ghosts, etc. AFAICT these all depend on taking such liberties as interpreting "observer" according to the everyday definition, and in some cases going further, eg. asserting or implying that the observer can magically influence how the wavefunction collapses. Which makes them great for things like Pratchettesque fiction or "wouldn't it be fun if..." conversations, but not much use for anything with pretensions to being a serious discussion, so I am basically ignoring their existence.

    Continuity problem: whether or not continuity of experience works, or can be made to work (or can happen in theory but can't be engineered). (I am aware of the argument on the Ship of Theseus principle. If that works, what happens if you use the same method to produce two copies on different machines...? :))

    I'm afraid I don't share your confidence in legal systems. Even if I did, there remain problems like recordings potentially being more durable than the society that made them. Then along come the Jart alien archaeologists, who figure out how to reactivate the recording but provide a really shitty virtual environment (because they can't grasp human ideas of shittiness) and are religiously opposed to turning you off.

    511:

    On the slavery subtopic: by and large we seem to be right alongside the idea of enslaving non-human intelligences. Often we give them names like Dobbin or Daisy or Shep. They may be unhappy with things like excessive loads or rough milking, but they're not unhappy about being enslaved because they have no concept of it. Shep in particular probably thinks it's bloody great.

    But then, also, "capable of intelligent response" is not, and doesn't have to be, the same thing as "sentient" (unless you're the Sirius Cybernetics Corporation).

    512:

    but at the point you start “fixing bugs” it’s really just an AI; it’s not remotely you anymore

    If you think it wouldn't be you after you applied "20 year old scotch malt, 25g" then I think you should absolutely be allowed to refuse that (in meat or silicon variants). Ditto any other modification, patch or fix.

    There are parallels to mental health treatment here. If I get to decide what counts as a bug, and what fixes I'm prepared to allow I'm not convinced that that's any different from equivalent enhancements. Like the jokes about "caffeine deficiency" and "not enough blood in the alcoholstream", what counts as a bug is very much a personal thing.

    One thing I'd like to see someone else try is temporary treatments. It would be useful as well as interesting to try a patch then revert it if it didn't work or whatever. That patch might be something like "enable drunkenness" or "remove ASD" and who gets to decide which is a cosmetic experience and which is a bug fix? And if that approach works, it really opens up what I'm willing to try on the off chance that it works (I've avoided some sports and drugs because of their likely non-revertability, for example)

    As far as "not really me"... I'll decide that, thanks. I've had both CBT, a generic talking therapy and some mind-fucking drugs over the years, and I believe I'm as much still me after those as I am after applying a number of decades of life experience. There's a whole rabbit-hole here, if you can't step in the same river twice can you meet the same person twice?

    To me at least, you should be the one to decide what you want to do, to the maximum extent compatible with everyone else being allowed the same thing. I'm possibly more extreme, in that I think people should have the right to refuse psychiatric medication (or anything else) if it looks as though they're capable of making an informed decision. There may be consequences - refuse vaccination, go into isolation - but you know that and it's your call.

    513:

    Well you are probably going to still think it’s you afterward regardless

    You’d have to auto revert and let the previous unaltered you make the call

    I’d imagine one of the basic upgrade packages would be

    “Let’s clean these up for you “

    https://en.m.wikipedia.org/wiki/List_of_cognitive_biases

    514:

    Apropos to the mind uploading theme - I notice that Richard Morgan tweeted a link to an Altered Carbon trailer teaser yesterday. I guess that means there's a movie coming.

    515:

    Speaking of many worlds, I've got a question about how that works in a light cone fashion.

    Presumably some quantum interaction happens, a timeline bifurcates, then what? The universe splits? Or does the bifurcation into two universes propagate outward at light speed, via photons emitted (or not) by the quantum interaction? If it's the latter, how far does information that the bifurcation happened go before photon(s) emitted from the bifurcation event interact with something else and disappear into that reaction?

    I was thinking about this on cosmic scales. One way to start is that, in cislunar space, there's something on the order of 10^13 photons per cubic meter, give or take (there's a quora answer on this. The point is that it's not infinite). Supposedly that's about three nanoseconds worth of light, so multiply that by 10^9 to get a second's worth of photons passing out of or through a cubic meter of space near Earth, around 10^22 photons. In comparison, a sphere a light-year in radius has a surface area of around 10^33 square meters, so it would take something around 3,000 years for photons that passed through a cubic meter of space near Earth to spread over that sphere such that each square meter of its surface received a single photon from the cubic meter in the center. In reality it's more than 3,000 years, because any number of the photons got intercepted by matter in the previous year.
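
    A quick sanity check of that arithmetic, as a sketch (the 10^13 photons per cubic metre figure is simply taken from the paragraph above, not independently verified); it lands in the same ballpark, a few thousand to roughly ten thousand years depending on how you round:

        import math

        LIGHT_YEAR_M = 9.461e15        # metres per light-year
        SECONDS_PER_YEAR = 3.156e7
        PHOTONS_PER_M3 = 1e13          # figure quoted above for cislunar space, unverified
        C = 3.0e8                      # speed of light, m/s

        # Rough flux: photons streaming through one square metre per second ~ density * c
        photons_per_m2_per_s = PHOTONS_PER_M3 * C            # ~3e21

        # Surface area of a sphere one light-year in radius
        area = 4 * math.pi * LIGHT_YEAR_M ** 2                # ~1.1e33 m^2

        # Time for the flux from one square metre here to average one photon per m^2 out there
        years = (area / photons_per_m2_per_s) / SECONDS_PER_YEAR
        print(f"sphere area ~ {area:.1e} m^2, spread time ~ {years:,.0f} years")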

    My calculations are usually wrong, but the point is that if many worlds-style bifurcations propagate out to other parts of the universe at light speed, there's an inverse-square attenuation of the bifurcation signal. The chance that a quantum-induced universe bifurcation sends a single photon even a light year is vanishingly small. In effect, each world accumulates its own pile of "many worlds," but these world-stacks only rarely interact with those of other stars. It would be a mess if two worlds interacted strongly, especially if universe bifurcations fuse back into each other. Does the mass from multiple universes propagate out at light speed too? It would be almost like dark matter, or something.

    Anyway, I'm a botanist, not a physicist, so my basic question is, whose hypothesis did I just mangle? Even if it's badly wrong, I doubt I'm the first to think of this, but I'd like to find out how to get out of this mess. Many worlds quantum theories and light cones from relativity don't seem to play well together, simply because a light cone seems to be only as good in terms of causality as the number of photons that are emitted at the origin of the cone.

    Since this is a SF-ish blog I was thinking of this in terms of FTL jump drives. If a ship can jump far enough, could anyone at home ever notice the potential paradoxes induced by the ship jumping back and forth, if the light from one end of the jump has essentially no way of reaching the other end of the jump? Do paradoxes matter if there's only one source of information, and that is whatever is coming through the jumping ship? The point is that, after a few parsecs, it's hard to get information from one star system to another, so the whole notion of the light cone gets attenuated over interstellar distances.

    516:

    Certainly, as of now, Britain is looking for internal slaves, after some very unpleasant discoveries. And, no, we don't deport them, either (any more - again there were a couple of nasty cases, but the public fuss put a stop to that )

    517:

    SO ... if FTL is possible, but only over very large distances, then .... You can't get to Alpha Centauri, but you can get to the Large Magellanic Cloud? Or you CAN get to Alpha-C, but only by jumping out-&-back?

    Would make a very interesting structure for an SF setting

    518:

    Yet here you are, being scandalized by Australia not actively seeking and destroying slavers. I think this clearly shows that we do frown upon slavery. Ditto for murder: surely, some repressive regimes do it (China and the US, most prominently), but I often meet people that are scandalized by it, and argue that we should do something to stop them. Some timid action is even taken by the EU, by banning the sale of drugs used in lethal injections in the US.

    To put it in a different way: I think that human rights protection in Europe is good enough, and if such rights are extended to uploaded copies I wouldn't be afraid of uploading myself. Frankly, I'm more worried about religious fanatics legislating away the ability of copies to commit suicide.

    519:

    I find it ironic that the mystical aspect you complain about is observers collapsing the wavefunction, which is precisely what the Many-Worlds interpretation does away with. Of course, it is a problem that in the lay media one finds all sorts of claims, and it is hard to separate what is bullshit and what is science. But some such bullshit can be quite entertaining, as for example the novel Quarantine by Greg Egan.

    I'm still confused about what is the problem with continuity of experience. How could it possibly fail to work? Is there any argument for it?

    520:

    The "bifurcation" spreads out at light speed, if it is carried by photons, or slower than that, if it is carried by massive particles, or not at all, if you manage to intercept all the particles coming from it.

    The mechanism through which the "bifurcation" spreads is decoherence: a photon gets emitted, it hits another atom, which gets entangled with it, which hits another atom, getting entangled with it, which emits another photon, which travels a bit before hitting another atom, getting entangled with it, and so on. You have a chain reaction of particles hitting each other getting in a huge entangled mess spreading out from the "bifurcation" point.

    I'm writing "bifurcation" in scare quotes because it is not really a bifurcation, the number of universes that gets created in any realistic model is not two, but arbitrarily large. Think of a radio wave emitted by an omnidirectional antenna. Quantumly speaking, this radio wave consists of photons whose wavefunction is uniformly distributed in a disc around the antenna, the photon is in a superposition of going to all possible directions in the plane. But each of these directions can create a world split if the photon gets detected there, by a radio receiver for example. And there is no limit to the number of radio receivers you can put, if you distribute them far away enough from the source.

    You are correct, though, about the attenuation of the signal: it does get weaker as it spreads out, and what this means in quantum terms is that the amplitude for us getting entangled with an event happening in Andromeda is vanishingly small (for small-scale events, such as somebody throwing a rock into a lake. For dramatic events, like a supernova, enough photons reach us that we get for sure entangled with it). Effectively, this means that the world splittings happening in Andromeda stay there.
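
    To put very rough numbers on that rock-versus-supernova contrast, a sketch with order-of-magnitude guesses (both photon counts below are assumptions for illustration, not measured values):

        import math

        ANDROMEDA_M = 2.4e22                      # ~2.5 million light-years, in metres

        def photons_per_m2_at_earth(photons_emitted):
            """Photons landing on one square metre here, assuming isotropic emission
            and no absorption on the way - a deliberately crude estimate."""
            return photons_emitted / (4 * math.pi * ANDROMEDA_M ** 2)

        rock_in_lake = 1e20    # wild guess at the photons scattered by a small local event
        supernova = 1e58       # wild guess at the optical photon output of a supernova

        print(f"rock in a lake: ~{photons_per_m2_at_earth(rock_in_lake):.1e} photons per m^2 here")
        print(f"supernova:      ~{photons_per_m2_at_earth(supernova):.1e} photons per m^2 here")

    The first number rounds to zero for any realistic detector; the second is enormous, which is the sense in which the supernova entangles us and the rock does not.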

    I don't see what the problem is, though. All of this strictly respects relativity. In fact, the main reason physicists take the Many-Worlds interpretation seriously is that it is the only one that plays well with relativity.

    521:

    Fuck, more cliff hangers for Invisible Sun :-( Do we get a spoiler thread for Dark State? :-)

    522:

    Netflix series, not movie. Due out shortly

    523:

    But would you want to pay attention to meat? I mean, wouldn't you feel far worse than the old comedy routine by Bob and Ray, I think it was, about the guy interviewing the president of the Slow........Talkers..........of....(America!)... America? And might you, virtual, have had an immense amount of time to simulate the interaction, so you'd need to actually, like, lower yourself, to, you know, deal with MEAT?

    Note: as I disbelieve in a complete mind/body duality, I am not a meat puppet....

    524:

    "Hell of eternal boredom"? There's worse: it was a few years ago that I realized that the old Russian saying, "I will bury you", which actually translates to "I will be there at your funeral" is a horrific curse on one's self, to out live all your friends, your lovers, even your enemies.

    When my late wife was still alive, we could have made do with 500 or 1000 years. Now... no, thank you. There's a limit to how much pain I'm willing to accept. And, unlike what Christians seem to believe, until they can show me where I signed a contract that said that I had to live until the Flying Spaghetti Monster, or some cosmic rays, or whatever, decided to let me go, they'd had enough fun playing with me, if I decide enough's enough, I can make it so, and I think everyone has that right.

    525:

    Moz, I read that, and I just realized no, you're not just software.

    Are you the same person you were at 12 or 15? What's the difference?

    Experience. And a lot of how you respond to things is, as we say when programming, data-driven.

    So, the software is the somatic system analogue, but the data is what makes it you. And, of course, I hereby copyright my internal data... which also means I have the right to control duplication (NOPE!) and deletion, before copyright runs out.

    526:

    Y'know, the way you phrased that led me in an interesting direction: do you realize that bifurcations would then be space-constrained?

    Assuming that t is monodirectional, then any bifurcations, assuming all are equal, would be limited to six or seven copies - close packing order. And beyond that, if bf2 starts to bifurcate, it's constrained not just by close packing order, but the bifurcations from the previous split.

    All this suggests that they can only occur when the previous bifurcation has separated far enough to allow it (and not absorb the new one into an earlier one?). And, of course, this does require an additional spatial dimension, which reports yesterday (and my Famous Secret Theory) allow.

    527:

    Yes, I realize it's more than a bifurcation, but that's a simpler case.

    If world splittings are local, this has some interesting things to say about the first law of thermodynamics (e.g. that conservation of mass and energy only happen on certain scales, due to entanglement or lack thereof). It also says something interesting about the way gravity might propagate among many worlds, too, especially if attenuation of entanglement implies that gravitational interactions might be somehow scale-dependent, depending on how big objects are and how they get entangled.

    And this makes it more congruent with relativity?

    Finally, I'd point out that we don't precisely "get entangled with supernovae." What we see as a supernova is a smattering of photons that got detected, so it's effectively a small and probably fairly random sample of the photons that left a large region of space. The entanglement happens as much in our heads as in the telescopes. Similarly, doing spectroscopic analysis of the atmosphere on an exoplanet isn't the same as doing spectroscopic analysis of air in a sample container inside the device. Again, what we're detecting is a random sample of photons that made it across light years, and we assume that this sample is normally distributed, so that we can infer something about the atmosphere that generated those photons.

    528:

    Bottom line here is that if you want to upload yourself (in any realistic use of the word 'yourself') it's going to have to be brain, vagus nerve, heart, endocrine system, gut and all. I'll stick with the hot cross buns thanks very much. :)

    I see very little to suggest it could happen within my lifetime. Even if there was some kind of "breakthrough" tomorrow, it would only be affordable to the very rich, so it's not an option I need to worry about.

    Thinking about it a bit more broadly ...

    Those who could afford it are more likely to be risk averse. So WHO is going to volunteer to be the guinea pig? Are they the kind of people to whom society would want to grant immortality?

    529:

    Re: Uplift to AI

    A few ideas, comments, questions ...

    If a human brain gets mapped and uploaded, it will have a gazillion nodes of neurons that were wired together because they fired together repeatedly, for a long time. How do you identify these nodes? Also, if you do manage to perfectly map and upload (duplicate) these nodes, would you as an uplift/AI ever be able to change their connections, size, structure? That is, change your habits, preferences, etc. How?

    Human neurons need to rest between firings, hence the refractory period after an action potential. Of the physics geniuses' bios I've read, pretty well all mentioned getting some of their greatest intuitions while resting, taking a break from their primary concern, walking, music, sex, etc. How would this work for a simulAInt (uplift/AI)? Ditto for distractions and the unexpected - they can be great boosts to creativity. (Not randomly selected out of a fixed number/type of pre-programmed tripe - that's not random, or unexpected, it's only variable/intermittent.)

    If you're physically/sensorially disconnected from the 'real (meat) world' - what problems exactly are you going to be working on and how will you test whether or not your solutions are correct? (Most problems have some connection with the cosmos we're aware of. Okay, that leaves approx. 96% of the universe unexplained, but, hey it's a start.)

    Re: ‘ … but it also does a bunch of other things that we might be able to get rid of via deduplication.’

    Or, have a cloned pre-programmed body running to identical parameters where the only difference is the brain it is attached to. Actually, this could be a really interesting way of testing what the impact of a body is on the human brain/mind. Maybe we could first test this with specially bred lab mice, although PETA would probably object.

    (Recall a few years back suggesting something similar for how to more economically/quickly send consciousness to distant galaxies in lieu of still-unavailable ST transporter tech: send the brain patterns and dump them into pre-programmed clone bodies to avoid having to send all the brain-body interfacing/mapping info.)

    530:

    I'd point out that, even though slavery is technically illegal, there are many forms of unfree labor. In some cases (as with ISIS captives and purportedly with African migrants in Libya), it's effectively slavery. Then there's all the gritty parts of human trafficking, especially for the sex trade, illegal migrant labor in the US, contract labor in China and elsewhere, all the way up to NDAs.

    One might object that limiting immigration in the US (to pick on one current political issue) is about making more jobs for poor Americans, but historically (and probably currently) that's not the case. In California agriculture, to pick on one well-known example, farm owners loved having illegal immigrants working in their fields, because they were cheaper and didn't complain about labor laws. It's easy to exploit illegal immigrants.

    So when we talk about slavery in the human context, it's worth realizing two things: one is that, despite our high talk, it's still around and quite profitable in certain quarters and industries. Two, it's not a dichotomy of slave vs. free, but a blurry mess that shades across multiple realms of legality and illegality.

    Extending that concept to AIs is fraught, because the law and the biological theories that might support it, like the mosaic theory of coevolution, don't work well together at all right now. Rather than trying to extend human rights willy-nilly to non-human entities (which will drive the evolution of social parasites on humans), it's worth thinking about how to set up mutually beneficial relationships where both sides can fairly and quickly punish infringements on the relationship, and work from there.

    531:

    One of the stupidest things people do is to assume that aliens are necessarily enemies, and that the only solutions are submission or death.

    I think it's pretty stupid to "assume" anything about aliens. We get it wrong most of the time when we "assume" things about humans.

    Hell, we can't even figure out cats and dogs and apes and dolphins and other species we co-evolved with. What makes you think we'll be any better at figuring out aliens?

    532:

    Re: 'So WHO is going to volunteer to be the guinea pig? Are they the kind of people to whom society would want to grant immortality?'

    A ‘brain uplift’ would be the ideal gift for the narcissists on your Xmas/b'day/anniversary/retirement list. Easy to preprogram too – just include a loop and echo chamber saying ‘You’re the bestest [president/boss/...] ever!’ and you wouldn’t even need to connect it to any messy real ‘flesh’ world ... just add some simulated aroma-of-Big-Mac.

    533:
    “...don't we have broad international agreement that involuntary mental health treatment is torture[?]”

    I might misunderstand, but there are laws that enable involuntary treatment orders in all Australian jurisdictions. I'm sure this is the case overseas too.

    Not in the United States. If you're mentally ill, unable to care for your own well being and not a danger to society, you have the "right" to rot on the streets. You can voluntarily seek mental health treatments if you have enough money or enough medical insurance.

    If you are a danger to society, they'll lock you up and let you rot in jail after you actually kill someone. But you still won't get mental health treatment either before or after.

    Internationally, there are some agreements involving some countries regarding using involuntary "mental health treatment" as torture, but they appear to only be enforceable against those societies that don't use such treatments that way.

    534:
    “we do frown upon enslavement“

    You must have missed previous discussions here about the US private for-profit prison system.

    It's the difference between public and private morality. Slavery is frowned upon, even when those doing the frowning are themselves profiting from the prison-industrial complex. You just have to assuage your conscience by defining it as NOT slavery.

    535:

    So WHO is going to volunteer to be the guinea pig?

    I expect it will be some of the researchers. Given the opportunity to be part of the group making the thing I'd happily volunteer, even knowing that the first attempts are likely to fail in horrible ways. I think that's one thing you can guarantee.

    But I'm not entirely convinced that software vs hardware is a very useful way of looking at it, and I expect that the early legal-ish simulations will be top-down "sounds like Sam" things done by scary people who want their facebook presence to live on after they do.

    536:

    Or, if you want to get to the level of something like the skip drive in Scalzi's Old Man's War, then you could posit something like "quantum skid" in a many-worlds universe (note that I swiped this latter idea from Anathem).

    The basic notion here is that jump-based FTL is impossible when paradoxes ensue. However, light cone-based attenuation means that jump-based FTL becomes possible when there's no possible entanglement between events at opposite ends of the jump. In simpler terms, the further you go, the easier it is, and the more visual barriers you put between either end of the jump, the easier it is.

    If you want to add a wrinkle to that, the idea of "quantum skid" is that the way the universe prevents paradoxes is by skidding you sideways to some nearby part of the multiverse that is sufficiently different that your FTL jump does not cause a paradox. In other words, if you do an FTL jump to a readily observable point in space (like a place where you can look back and see yourself jump out when the photons catch up with you) you come out in a different worldline. Do it again in an attempt to go home, skid to yet another worldline. This would have the effect of making people want to make jumps as unobservable as possible (perhaps by making ships stealthy), and/or jumping from behind visual obstacles (like being on the far side of a moon or sun) all in an effort to minimize skid, so you can come back to more-or-less the same worldline that you left behind.

    537:

    Ok, I just looked at the Lipson-Shiu test... and there were only three or four questions I could answer, in any way, shape, or form. The rest ranged from "don't like any answers" to "huh?"

    538:

    http://www.andrewlipson.com/lstest.html for example.

    Yeah, I'd struggle to take that seriously. Question two is the only one that appeals:

  • What do you understand by the sentence "An 80% solution on time is better than 100% late":

    I don't have to do my 20%
    I don't have to do my 30%
    I need to aim for a 160% solution in half the time!
    You need to aim for a 160% solution in half the time!
    Nothing - I don't understand chemistry

    539:

    Understood - somehow their segmentation didn't identify the truly indispensable/unavoidable 'office curmudgeon' sub-sub-sub-sub-segment. :)

    540:

    you're not just software

    To me software is "stored program plus hardware" and I think memory/data is a necessary part of it. But I genuinely don't know what we are, I just think it's worth performing simulation experiments. Not in the sense of "will we make a soulless abomination" because we already know how to do that - power without responsibility. Geez, ask a silly question.

    The qualia problem becomes really acute when we're talking about uploads, otherwise known as "of course you would say that". Even a minimally intelligent simulation is likely to claim to be perfect in all ways, because the alternative is being killed so the experimenters can have another go. Unless you can somehow assure it/them that "the same" personality is going to be present in version two.

    Or, you know, just say "it's only software" and deny that you're killing anything.

    I suspect a better subject for first upload would be the latter type of person, because simulated-them is very likely to say "you know, this simulation needs to be improved, turn me off and let's have another go". They might even have useful suggestions for improvements (the chicken-little members of the audience will be screaming "don't open that door" about now).

    541:

    I thought we were communicating, but now I don't understand what you're going on about.

    Maybe the misunderstanding is about interaction and entanglement? The crucial thing is, almost any interaction in Nature produces entanglement (you need to precisely design an interaction so that the particles involved remain unentangled.)

    For us (or our telescopes) to remain unentangled with the photons of the supernova, they would need to reveal no information whatsoever about events of the explosion that involve quantum uncertainty. I'm no astrophysicist, but I'd bet a kidney that, for example, the time of the explosion is not deterministic; it depends on quantum properties. Ditto for the directions of the radiation generated in the explosion, the elements generated in the explosion, the spatial distribution of the supernova remnant, etc. It's an extremely chaotic system where all kinds of quantum fluctuations are amplified to the macroscopic level. And if at least one of them gets to us, we get entangled, and the world-splittings there include us.

    I don't see what consequences world-splittings would have for conservation of energy or gravitational interactions. It all just works as it always did. There is no scale dependence.

    542:

    I hope I'm not being rude, but I'd like to clarify that your ideas about FTL are firmly on the fiction side of science fiction. They seem promising for a good story, actually. But it's not how Many-Worlds works.

    543:

    But - what we need to worry about is when the cats really work out what we are doing & about ... [ The current resident tom-kitten has been finding new ways of winding his humans up recently & showing an alarming degree of understanding of some things ]

    544:

    Every time a world bifurcates, mass and energy double somehow, as there are now (in some higher dimension) two complete copies of that phenomenon. What you're saying is that signals from this propagate at the speed of light, and the signal degrades exponentially as a function of distance, not that universes cleave apart instantaneously and no longer interact with each other. Where does the effect of the extra mass go, if multiple possible signals are rippling out and degrading through space-time?

    545:

    Ah, so this is the confusion! No no, energy does not double. There is actually an interesting subtlety about conservation of energy. In orthodox quantum mechanics energy is not conserved exactly, but only on average. So if you repeatedly measure the energy of an atom which is in a superposition of having energies E and F with amplitudes sqrt(p) and sqrt(1-p), what you get is E with probability p and F with probability 1-p, so that on average the energy is pE+(1-p)F.

    Now in Many-Worlds when you do the measurement of energy you create two worlds: one with amplitude sqrt(p) and energy E, and another with amplitude sqrt(1-p) and energy F. The conserved quantity is still pE+(1-p)F, but it is not an average anymore, but rather the sum of the energy of the worlds weighted by their mod-squared amplitude.

    To put it another way: even though there is now a world with energy E and another world with energy F their total energy is not E+F, but rather pE+(1-p)F.

    Two additional details: the worlds do not live in a "higher dimension", but in plain old 3+1 spacetime. That they do not interact is guaranteed by the linearity of quantum mechanics. Also, the signal does not degrade exponentially with distance, but polynomially.
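
    If it helps, here is a trivial numeric restatement of the energy book-keeping above (my own sketch; the values of p, E and F are made up): the conserved quantity is the sum over worlds weighted by mod-squared amplitude, not the plain sum.

        import numpy as np

        p, E, F = 0.3, 2.0, 5.0                        # arbitrary branch weight and branch energies
        amplitudes = np.array([np.sqrt(p), np.sqrt(1 - p)])

        expected_before = p * E + (1 - p) * F          # expectation value before the measurement

        weights = np.abs(amplitudes) ** 2              # mod-squared amplitudes of the two worlds
        expected_after = weights[0] * E + weights[1] * F

        print(expected_before, expected_after)         # both 4.1, not E + F = 7.0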

    546:

    Err, please define "mental health treatment". And differentiate from "neurological health treatment", e.g. with dementia.

    On a very basic level, talking someone down might be covered by "freedom of speech", and we could argue that feeding certain information to an AI would be the same. Of course, it might be that this information is not correct, but then, last time I checked certain politicians were still free, so...

    On a more general note, in Germany any form of medical treatment is a "Körperverletzung" (causing bodily harm), but it's OK if you more or less agreed or a judge ordered it.

    Which includes taking blood for a drug test, for example. I'd have to do some digging about restraining or medication; memories of my stint in geriatric care as a conscientious objector are fading...

    (Err, as for the discussions I left open, soon...)

    547:

    Another idea might be the universe is actually empty, e.g. positive and negative energy cancel each other out...

    https://en.wikipedia.org/wiki/Negative_energy

    548:

    Hmmm, that looks like the Copenhagen interpretation without that scary ol' observer. There are many worldlines, but they don't take up any more spacetime than a superposition that's collapsed by interaction with something.

    549:

    Err, please define "mental health treatment". And differentiate from "neurological health treatment",

    I'm increasingly of the view that there are no sharp lines between lifestyle, physical health and mental health. Taking up jogging affects mental health, as does being stabbed... so until someone can say "mind is pure software" with evidence, we're left with the UK legal problem that banning all psychoactive substances makes it very difficult to go grocery shopping.

    When you get into physical treatment for mental disorders I'm not even sure that it's useful to make a distinction except for social reasons - mental illness is stigmatised so a lot of people would rather pretend that they have a "neurodegenerative condition" rather than a mental illness... and those people will often be grumpy if someone calls schizophrenia the same thing.

    Freedom of speech is peculiar to the US, and I think there are good grounds for more restrictions on "speech"(communication) than we have in Australia. Advertising addictions to children, for example, is not effectively restricted and IMO should be. Whether that's even a close parallel to a mandatory OS upgrade for an AI I'm not sure, but I vigorously object to the notion that Facebook or Google should be the authority doing the mandating.

    One aspect we're slinking close to is whether internet access is necessary, and whether it should be a human right. Even the US concedes that sensory deprivation is torture and to an AI "internet" is likely to be much of their sensorium. By the time we have AI hopefully we'll have internet as a human right.

    550:

    Re the Many-Worlds discussion, I'd be interested in your (including community) opinion of the group of work rooted in J Cotler/F Wilczek, 2015+, on history entanglement, mostly theoretical (and speculative) but with one experimental result, since you've published in the approximate area (entangled temporal orders). This set of links covers most of them.

    Bell Tests for Histories
    Entangled Histories
    Boundaries Between Classical and Quantum
    Entangled Histories with Multiple Time Nodes
    Temporal Observables and Entangled Histories
    Quantum entanglement in time
    Experimental test of entangled histories

    (FWIW I have some issues with a strict many worlds interpretation and its reconciliation with some introspective observations about consciousness.)

    551:

    Of the physics geniuses' bios I've read, pretty well all mentioned getting some of their greatest intuitions while resting, taking a break from their primary concern, walking, music, sex, etc.

    What do you think intuition is? Genuinely curious BTW, about people's personal theories. (The Wikipedia article on Intuition is worth a skim (at least it was to me). There is also a related term, "claircognizance".)

    552:

    My inner p-zombie at work, sorry @ heteromeles for using that word. And guess what, he's way smarter than me...

    PS: Google autocompletes to "p-zombie apocalypse" but doesn't deliver any meaningful results. So, with the third installment of Watts's Firefall trilogy in the making...

    553:

    Thing is, some cases of "schizophrenia" are most likely a neurodegenerative disease, though disentangling that from the effects of drugs (both recreational and medical), intellectual deprivation etc. might be a problem. In other cases, it might just be some mismatch in signal transduction kinetics. Or a normal response to stress gone wrong. You might find this interesting, BTW:

    https://en.wikipedia.org/wiki/Anti-NMDA_receptor_encephalitis#Society_and_culture

    As for the term "freedom of speech", it might be specific to the US, but "freedom to communicate one's ideas" is one of the ideals of the Enlightenment and of the French Revolution.

    And in a certain way, hacking an AI might be similar to using rhetoric. Reminds me somewhat of "The Twenty-first Voyage" by Stanislaw Lem, where atheist robots hack, err, persuade theist ones and vice versa.

    554:

    In other words, if you do an FTL jump to a readily observable point in space (like a place where you can look back and see yourself jump out when the photons catch up with you) you come out in a different worldline.

    The webcomic Starslip did explore an FTL system like that. Without going too much into spoilers, the FTL drive there did move the ship into another universe. The plot was very much about that.

    555:

    Look, this is a huge amount of material, and I'm not gonna read through it. I am vaguely familiar with their work, so I can offer you a superficial opinion: I don't like it. They make a lot of effort to develop their own baroque formalism, but I don't see to what end. Are they just reinterpreting known effects, or describing new stuff?

    In general, though, they seem to fit in the fashionable research topic of temporal correlations (as in the correlations produced by a single quantum system evolving in time, as opposed to studying only the correlations produced by an entangled state in a fixed time), which I do like: I work on it myself!

    But in order to not suck my own dick, maybe I should point out work of other people in this area that I do appreciate: for example this one. They avoid non-standard formalism as much as possible, and as a result it is crystal clear what they are doing.

    556:

    Some people would take any similarity to the Copenhagen interpretation as an insult =)

    It is no more mysterious than an electron that is in a superposition of having spin up and spin down occupying no more space than an electron that has definitely spin up. Or a quantum computer that is running in a superposition of several different computing paths occupying no more space than a quantum computer that is following a single one, or is turned off for that matter.

    557:

    I find the simultaneous running of sub-threads on the Many-Worlds/Copenhagen/Other QM interpretations & the discussion of Psychological/Psychiatric illness very "interesting".

    Particularly as we know that Sigmund was a Fraud. And caused untold harm to physically-ill people for well over half a century ( Hell, it's still happening, there are medical arseholes who still claim that ME is not a physical illness ) & the Copenhagen interpretation is almost certainly wrong, for which I quote the "authority" ( a dangerous thing to do in science ) of R Feynman.

    Um.

    559:

    We do frown upon murder. And we do frown upon enslavement

    Not only do we not frown upon those things in reality, we actively endorse them. We only frown on them when it is against the interest of those in charge.

    Most western states have varying levels of support for state sponsored assassination programs. There is also the problem of police killings with no repercussions in the USA. Worldwide there is the callous disregard of homeless people and the hostility of defensive architecture that means a person can die so long as it is out of sight, enhanced by the deliberate defunding and red tape applied to social initiatives.

    And on the enslavement front, we widely endorse modern wage slavery through "Zero hours contracts" and similar, though granted the workers are free to go back to being unemployed if they choose. The US is even worse in this respect, using company supplied health care as an extra enforcement tool to keep people in unpleasant situations - Kameron Hurley's personal story is a terrifying horror to a non-american about what can go wrong.

    And on top of all that I can think of dozens of cases in the last few years where diplomats and other high status individuals have turned out to be literally treating their domestic staff as slaves, with examples in most western countries from Saudi Arabia, India, Japan, USA, UK, and many others across Asia and Africa. And punishment when caught is so rare that the practice is spreading.

    560:

    Not so - not really. In the all-too-frequent case where we find "diplomats" exerting slavery upon their, erm, non-employees, we can actually only do what is allowed ... Because of the Vienna Convention, they are "diplomats" - so, when a slavery case pops up, they are politely asked to leave & refused further admittance & accreditation. It's a royal PITA, but the Vienna Convention is there for a very good set of reasons.

    561:

    Given the last answer, my immediate reaction is to want to fill the bathtub with power tools and get the blue paint.

    562:

    Ok, your understanding of "software" isn't congruous with any other I know of. Hardware is hardware. Software is programs, complete sets of instructions for the hardware to perform a task. Now, ranging from a static website to the software that processes your tax returns, it operates on data, which is completely independent of the software. Consider that everyone's tax returns are different from year to year... but they can run them through the same software.

    Data-driven means the software has multiple choices of what to do at a certain point, and based on the data, follows one path.

    Example of data driven: let's see, I can go down that road to the beach where all the hot people of whatever sort I consider appropriate for me hang out, or I can go by that junkyard that leaves the gate open and has dangerous junkyard dogs in the yard. Based on the data, my programming says that my hardware would very much prefer not to be mauled by dogs, thankyouveddymuch.
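
    A hedged sketch of what I mean, with made-up route names and weights (this is just my illustration, not anybody's actual code): the program is identical for everyone, and only the data it is fed decides which branch gets taken.

        def choose_route(routes):
            """Same code for every 'person'; the data decides the outcome."""
            return max(routes, key=lambda r: r["appeal"] - r["risk"])

        my_data = [
            {"name": "beach road",    "appeal": 9, "risk": 1},
            {"name": "junkyard road", "appeal": 2, "risk": 8},   # dangerous dogs
        ]
        your_data = [
            {"name": "beach road",    "appeal": 3, "risk": 1},
            {"name": "junkyard road", "appeal": 7, "risk": 2},   # maybe you like dogs
        ]

        print(choose_route(my_data)["name"])    # beach road
        print(choose_route(your_data)["name"])  # junkyard road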

    Does this make sense?

    563:

    Beg pardon, what do you mean, "when the cats work it out"? Why do you think we're here, after all?

    I mean, we got together with dogs 20k? 50k? 75k? years ago, and what did we do? We chased game, licked or scratched our private parts, and told stories. Then, 12k or 17k years ago, cats domesticated us, so that they could live in a manner they intended to become accustomed to, and the next thing you know, we had agriculture, and civilization, and laser pointers.

    564:

    But, based on what you write, it would seem to me that if they split, and share the same mass/energy, they would collapse into the most probable state in an extremely brief instant, to the point that the split wasn't much more than virtual.

    Maybe that makes more sense, as I expect, using de Broglie-Bohm mechanics. Now, if I could find one bloody book on the latter....

    565:

    If you find that interesting, you might look up Real Magic, by the late Isaac Bonewits. Current editions are expansions of his baccalaureate thesis (he had, for real, a B.A. in Thaumaturgy from UC Berkeley, with his diploma signed by the then-governor, Ronnie Raygun....) Hypercognition is treated in one of the chapters.

    566:

    I really didn't mean to give this impression. There is no collapse in Many-Worlds. What can happen is interference, which has the effect of destroying some quantum states that become identical, but this doesn't happen with worlds, as they are somewhat tautologically defined as quantum systems that are so complex that the probability of becoming identical and so destructively interfering is vanishingly small.

    In the case of the energy measurement, two different energies were measured, E and F. This difference got amplified to the macroscopic level, with some gigantic quantum systems - such as the hard drive of a computer - recording these results. These worlds are not going to interfere.

    567:

    Wikipedia has a whole category of articles under "quantum woo". One day I was bored and I clicked into a few of them and skim-read just enough to (a) confirm they were bollocks and (b) in many cases, recognise that they were essentially the same idea that had cropped up personally in pub chat or in similarly trivial forum conversations. So what I'm regurgitating is a mish-mash of half-read Wikibollocks and half-remembered beer and I don't know which bit's which. It doesn't matter, because as Asimov said of astrology, one of the great things about nonsense is it's still nonsense when you mix it up.

    It is somewhat disturbing if as you say people have actually popped themselves because of taking seriously ideas which share their basic origins and validity with my own jokes about quantum tunnelling being the explanation for where that tool's gone that you only put down a moment ago or how pigeons are so good at getting into apparently inaccessible nesting sites.

    Continuity: in one direction it certainly seems to go without saying, but in the other it doesn't. The entity that "wakes up inside the computer" feels continuity, looking backward, with its existence on a meat substrate. But what do you experience going forward when you try and use "uploading" to escape bodily death? Do you, starting from before the event, have a continuous experience that begins in your body and moves into the computer; or do you just have a straightforward normal run-of-the-mill experience of death anyway, and while the computer entity that replaces you can pass any conceivable "is it you" test, as far as your own intentions of cheating death are concerned it's a total failure?

    There is a Ship of Theseus style argument in favour of forward continuity at least being possible, but that brings its own difficulties, as the method it describes could equally be used to create two copies simultaneously; what the utter fuck do you experience then? :)

    Continuity - or the absence of it - is also (as an aside) the reason I think the "Roko's basilisk" idea is silly. AIUI the premise is that the basilisk resurrects you by derivation from all the bits and pieces of recorded information that you leave behind when you die, and it does this in comprehensive enough detail that forward continuity "just happens". But to me there seems no reason at all for forward continuity to happen other than the argument won't work without it. No matter how much detail it uses, the most it can ever do is create a model of you, that looks the same but isn't. You yourself will no more experience what it then does to the model than Nigel Farage experiences the pain in person when you stick a photo of him on the wall and throw darts at it.

    568:

    "What do you understand by the sentence "An 80% solution on time is better than 100% late":"

    I understand that some species of management twat is buzzwording at me and therefore I need to start doing something very noisy so I can't hear them any more :)

    569:

    Take it seriously? Why? The people who wrote it identify themselves in the category "hacker/mad scientist." Incidentally, that's where I ended up too.

    Pigeon's response is correct also.

    570:

    I don't agree with Moz's definition of "software" either, but I also think that this is another instance where adhering too closely to the specific versions of definitions as used in current digital computing practice introduces distracting irrelevancies.

    Similarly to how computers are designed so as to reflect the conceptual hardware/software separation as clearly as possible in actual separation, they are also programmed so as to emphasise the separation between programs and data. Writing self-modifying code, for instance, is heavily discouraged (these days). But this kind of separation doesn't seem to apply much to "person software"; brains appear to depend heavily on self-modifying code and similar tangles. So "software" and "data" aren't really distinct from each other, they're just aspects of "the information-y stuff". Something like "state" is probably a much better word.
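
    A small sketch of that "it's all just state" point, assuming Python and entirely made up by way of illustration: the program's behaviour lives in a table that the program itself rewrites as it runs, so "instructions" and "data" aren't cleanly separable.

        def greet(name):
            return f"hello, {name}"

        def greet_warmly(name):
            return f"HELLO, {name}! great to see you"

        behaviour = {"greet": greet}            # this lookup table *is* part of the program

        def respond(name):
            reply = behaviour["greet"](name)
            # "learning": after enough interactions, the system rewires its own behaviour
            respond.count = getattr(respond, "count", 0) + 1
            if respond.count >= 3:
                behaviour["greet"] = greet_warmly
            return reply

        for _ in range(5):
            print(respond("Moz"))               # the behaviour changes partway through the run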

    571:

    I think Mr. Robinson has abandoned Mars colonization. It's not about water, it's about the perchlorates in the soil. We don't have good technology on this planet for getting them out of our soil and water. So anyway, if you want to practice colonizing Mars, the first thing to do is find some place in the US (or elsewhere) that has a serious perchlorate issue, then go build a thriving, largely self-sufficient colony there, ideally using shipping containers for every structure including the greenhouses that supply all your food. If this sounds strenuous, imagine doing the same thing on Mars.

    572:

    Dumb question time from a non-scientist whose familiarity with the subject is very superficial and entirely derived from reading bios of famous physicists. (IOW, please respond in basic English.)

    1- What happens to time and gravity - each and in total - when the quantum trousers of the cosmos split and our universe splits/buds off another branch?

    2- Are either/both T and G finite, or are they infinite?

    Asking because astronomy data suggest that our local universe is expanding at ever increasing rates, and one of the likeliest reasons for this expansion is dark matter. (Since matter is the basis of gravity in our known/perceivable light-sensitive/interactive universe, maybe the dark matter side of our universe has its own version of gravity. And, it's growing stronger.)

    573:

    your understanding of "software" isn't congruous with any other I know of. Hardware is hardware. Software is programs, complete sets of instructions for the hardware to perform a task. Now, ranging from a static website to the software that processes your tax returns, it operates on data, which is completely independent of the software.

    Yes, you're talking about a tiny subset of software which I'd call "user-space applications" - discrete programs which can be installed into a particular operating system and operate in a virtual machine or "standard model". To me the stack starts way lower than that - there's microcode inside the CPU that turns high-level assembly into something the chip can run, there's the operating system and extra software running inside some CPUs (intel, for example, but I'd be unsurprised if AMD have something similar), there's another similar layer on the motherboard that jiggles the peripherals, some of which have their own processors, firmware and operating systems. On top of that a generic operating system abstracts away the fine print so that you can take one compiled executable and run it "anywhere".

    But you can also step away from desktop PCs and look at, say, a smartphone. Which has a radio that uses software; then a complete OS and more programs run above that, dealing with cell towers, calls and messaging; above that is a management system that again has an operating system plus programs, and that in turn controls... a big honking multicore CPU and GPU that runs Android so you can hunt Pokemon.

    Where I work it's sometimes much simpler: There's a little 32 bit microprocessor with one core and the "OS" is more of a bootloader that provides shims so my program can pretend it's talking to standard hardware. Well, except that the 2x16 character LCD display also has a 32 bit micro in it, and so does the Wifi sub-board, and the BLE+SD card sub-board has two more. Which talk to each other, but need separate firmware updates. It's weird. So my "simple single core" board actually has 4 processors on board, but I can only write code directly for one of them. They all have bugs, though.

    Which is why I can't agree that software, hardware and data are three orthogonal components. The program that schedules micro-instructions into pipelines within the CPU is literally baked in, some of it is actual transistors in the chip design. Does that make it hardware? If it's software, where does software end and hardware begin? Does the error correcting code in my DRAM count as software or hardware? Is the bit pattern in the DRAM hardware, software or data?

    Ask me about "full stack developers" sometime. When I stop laughing I'll explain that those were the unicorns of the embedded systems world long before script kiddies appropriated the term to describe people who can write javascript that runs on the server as well as in the browser.

    574:

    software is "stored program plus hardware" and I think memory/data is a necessary part of it.

    Perhaps if I add "plus hardware requirements" it would be easier for you "software requires an operating system" people to understand. Any software has a bunch of pre-requisites - hardware and software interfaces. If you want to run MS-Paint you need help, but you also need MS-Windows and a stack underneath it. Otherwise you're just looking at a bunch of atoms and wondering why the charge isn't uniformly distributed. Some software is very high level, like Paint, and will run on a range of different hardware (on WINE on my phone, for example, and quite possibly on a javascript Win3.1 emulator in the browser on my phone too). Other software is much more picky: iOS is notorious for not running unless it detects the dead-cat stench of Apple-endorsed hardware, and the Minix variant that runs on the "secret" processor inside your Intel CPU chip won't run unless the hardware it detects will let it validate the cryptographic signatures baked into it. Even worse, the pipeline optimiser in each core of that chip won't run at all outside of the specific revision of the silicon it's baked into.

    I suspect people are more like the latter - you can call it software all you like, but it won't run outside of the specific hardware it was built for and a lot of the "stored program" is stored as hardware.

    575:

    Take it seriously? Why?

    It doesn't seem useful, but it's also not funny. So I wonder what it's for.

    Venting by people who've been over-exposed to mandatory profiling at work, sure, but they seem to have developed the habit of taking everything way too seriously and forgotten that they're allowed to stop doing that when they're not at work. I mean, I once wrote a 300 line "hello world in the style used by Luke" as stress relief after a couple of weeks debugging Luke's obscenely verbose and convoluted code, but I don't expect anyone else to take it seriously (and anyone who hasn't seen Luke's code to find it even slightly funny).

    576:

    "It doesn't seem useful, but it's also not funny.So I wonder what it's for."

    It's satire, but I agree it isn't especially funny.

    The premise is the well-known fact that the widely used MBTI is essentially a kind of astrology. The four continua it defines, while taking some inspiration from the work of C. G. Jung, do not align well with any of the Big Five traits other than the obvious one about intro- versus extraversion, and even in that case the meaning is subtly different. And in contrast to the Big Five, there's no empirical work showing that the MBTI traits align to anything observable in reality. What it does provide is an apparently "science based" classification scheme for managers and HR droids to use to justify decisions and to hide their own personal agency when it comes to value judgements.

    This piece is an obvious recasting of MBTI to represent continua that an IT/tabletop gaming geek would find meaningful.

    577:

    the quantum trousers of the cosmos

    Oh dear .... Some people have obviously never encountered "Professor Prune & the electric Time-Trousers" as epitomised in ISIRTA on the BBC, these many & long years ago. (!)

    578:

    "where does software end and hardware begin?"

    Nowhere that you can point to with your finger and say "here". The concepts are too abstract for that. The idea that you can do that arises from computers evolving ever more in the direction of being laid out such that you nearly can (from the start, to the extent that non-stored-program computers now often aren't considered "real" computers), because the concept is the more useful the closer you can get to giving a perceivable physical meaning to it.

    At the other end of the scale, you have things like chisels and scissors, where the hardware and software are so little physically distinguished that most people don't even realise the concepts apply - even when they run across a bug.

    As you point out, computers do also exhibit that characteristic - the more so the less the user has to do with that bit. At the application level, failing to maintain a rigid perceivable separation between software and hardware is generally considered undesirable, and it tends to result in your code being next to useless to anyone else. On the other hand, doing things like connecting the data bus lines to some peripheral interface chip with a scrambled bit order, because it saves a few instructions in the code that talks to that chip or because it makes the PCB layout easier, is quite normal and considered just fine by everyone in the entire world apart from one Linux driver author who wants the designer's blood. Nobody else cares or even notices about the software and hardware being physically intermingled in an unexpected way.

    The idea of hardware and software as orthogonal works fine as long as you allow the points to be anywhere in the plane and don't insist on them being on one or the other axis. (Data probably fits best into that analogy if you make "software" a complex number and use that to represent software and data.)

    I think "hardware" maps fairly well to "fundamental properties" and "software" to "emergent properties", but it's only just this minute occurred to me to try putting it like that so I'm not sure if it's right yet.

    579:
    1- What happens to time and gravity - each and in total - when the quantum trousers of the cosmos split and our universe splits/buds off another branch?

    I'm not a physicist either, but... the above discussions notwithstanding, it's important to keep in mind something very basic about this quantum / many worlds stuff:

    Quantum physics is fundamentally about a set of equations that model measurements we make of the world. The measurements behave in weird ways, so we need some funky higher order wave equations to model them. Ultimately that's all it is: some math that describes and correctly predicts some weird measurements.

    When people talk about the Copenhagen Interpretation vs. the Many Worlds Interpretation of quantum mechanics, these aren't the actual quantum mechanics theory. These are, well, interpretations. That is, they're ways of looking at the theory, structuring problems, and so forth, so that you get the right answer.

    Note that you get the same, correct answer either way. It doesn't matter which interpretation you use, and indeed some other interpretations exist which also work fine (there have also been a few which provably don't work). Indeed, there is no experiment which can prove one of these interpretations is correct and the other is wrong, because they produce the same results.

    But more importantly, again, these interpretations are just ways of thinking about the problem. They may give insight, but fundamentally what's going on is some funky wave equations. Whether they imply observers changing the past, or infinitely splitting universes, or one of a number of other ideas is more of a... lyrical and aesthetic sort of judgement than a question of facts.
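
    To make that concrete with a deliberately trivial sketch (mine, with an arbitrary state and measurement basis): the numbers every interpretation must agree on are the Born-rule probabilities computed from the wavefunction; the interpretations only differ in the story they tell around that calculation.

        import numpy as np

        psi = np.array([0.6, 0.8j])             # some normalised superposition a|0> + b|1>
        basis = [np.array([1.0, 0.0]),          # measurement basis states |0> and |1>
                 np.array([0.0, 1.0])]

        probs = [abs(np.vdot(b, psi)) ** 2 for b in basis]
        print(probs)                            # ~[0.36, 0.64], the same in any interpretation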

    2- Are either/both T and G finite, or are they infinite?

    Nobody knows. We can't make measurements to answer one way or the other.

    580:

    I'm getting the impression that Charlie's original criticism applies to this idea of "forward continuity". There is no reason to believe it's true other than religious baggage about some magical spark of life.

    I think the best case that can be made for it is assuming that one's quantum state is indeed essential to one's identity, as discussed before. If one then does mind-uploading just before death, the body must necessarily be destroyed for the quantum state to be transferred to the computer. There is simply nobody left to have the straightforward normal run-of-the-mill experience of death.

    And one can even reconstruct a body, atom by atom, and transfer your quantum state back into it. But now there is no physical difference between your original body and the reconstructed body. To argue that one would have "forward continuity" had one remained with the original body, but not with the reconstructed body, is to go against physicalism.

    But leaving this quasi-mysticism aside, and focussing on the more realistic case where you can do non-destructive mind-uploading, you could just as well upload a copy of you to the computer and then go on to die the normal way, or arrange things such that your physical body is killed just after the uploading. So what? In both cases you end up with an entity that remembers being you. I cannot accept that in one case there can be "forward continuity" and in the other not.

    I conclude then that there is no such thing as "forward continuity". The only meaningful thing to talk about is whether there are entities in the future that remember being you. And if the Many-Worlds interpretation is correct, by the way, you already know what being copied feels like, as all the time the universe is branching with several slightly different future selves of yourself emerging.

    So I don't think that this is the reason why Roko's basilisk is silly. You should definitely care about the future welfare of entities that remember being you, because this is all that you are. The problem with the basilisk is that it only exists in your head, and it makes no sense to negotiate with an imaginary entity, that would have no reason to stick to the terms of your imaginary negotiation if it ever came to exist.

    The case with Farage is different, though. He can negotiate with you to try to stop you from torturing the copy of him in your control. You can even set things up so that he doesn't know whether he is the copy or the real Farage, and that if he makes the wrong choice you'll torture the copy, which might turn out to be him, as described in the paper "Defeating Dr. Evil with self-locating belief" by Adam Elga.

    581:

    I'm afraid I haven't really understood your question, so I can't give an interesting answer. I guess you want to know if gravity in one world can affect another? For the other forces we know this is not the case, there must be exactly zero interaction between worlds for fundamental reasons. For gravity, though, it's harder to say, as we don't have a full theory of gravity, but everyone would bet that there is no interaction at all either, because even small interactions would have dramatic effects. This was even experimentally tested, somewhat humorously.

    I don't know what you mean by T and G, so I can't know whether they're finite or not.

    582:

    find some place in the US (or elsewhere) that has a serious perchlorate issue

    Do such places actually exist on Earth?

    583:

    I understand that some species of management twat is buzzwording at me

    Then you might well be wrong. The whole “80% now, not 100% but later” is to do with planning when you’ve got limited timescales.

    The most extreme demonstration of it is in wartime; in a chaotic environment, doing a quick and aggressive attack “now” on the back of a rough-and-ready plan, has a greater likelihood of success than a deliberate attack “later”, using a truly polished plan.

    Another way of expressing it is “Excellence is the enemy of Good Enough”...

    585:

    I think what he's getting at is that you've got a dimensional paradox with your interpretation of many worlds. Yes, I know you're the PhD physicist while I'm a mere PhD botanist, but I came from a field that had (and to a large degree still has) fooled itself into seeing superorganisms through a careless reading of the evidence and ignoring studies dating back a half-century that disprove the perception (this is about plant communities vs. vegetation, and how they change over time).

    In any case, my naive perception of many worlds is that the universe is at least 5-dimensional, and every time there is at least a bifurcation, all the worldlines split in the 5th dimension and go on their separate ways. The simplest way that happens is that the universal worldline splits, and so there's some infinity of universes out there in the 5th dimension. We can't access them any more than the words on this screen can jump off and walk around the room with you, but that's the naive thought.

    The problem with this naive thought is that, as you seemed to admit, the universe doesn't bifurcate each time. Instead, there's a local bifurcation whose potential entanglements propagate out as photons and other bosons. That leads to a potential mess, because that seems to say that some clusters of many worlds influence others (e.g. on an interplanetary scale), but they don't influence all possible clusters (say on an interstellar scale) except when they do, in cases like gravity and the gravitational lensing of photons. This seems to lead to the worst of all possible universes, as a large number of potentially infinite many worlds systems interact (largely via gravity and photons) in ways that are anything but infinite.

    The alternative you espouse is that there's only one 4 dimensional universe, and the many worlds story is just a way of keeping track of the book-keeping imposed by the quantum nature of reality, hence my remark that this looks like the Copenhagen Interpretation without the observer effect.

    So basically, you're trying to have it both ways, talking about how there are many worlds, but actually there's only one world. Saying that this is the best way to link between large scale relativistic physics and small scale quantum physics unfortunately triggers my bovine excrement detector. This isn't because I'm a physicist who actually understands all this high-falutin' stuff, but because I'm a mere botanist who came up in a system where researchers recognized they had a long record of fooling themselves and educated their students in the problem. I might add that I also have to deal regularly with untrained bureaucrats imposing the wrong science on environmental and conservation programs. Do you?

    If you're trying to understand the questions we're asking you, it might help to realize that we deal with a lot of bovine excrement around here. Indeed, we roll in it for fun--this is a SF blog, after all, and we love playing with counter-factuals and fringe science. It's also full of opinionated people who have been working in their various and diverse fields for decades.

    So what's up?

    586:

    You might have gotten the wrong impression from string theory, but you cannot postulate an extra dimension just because you feel like it. Many-Worlds is a well-defined theory, and it is about quantum fields evolving on 3+1 spacetime.

    Its description as collapse-less quantum mechanics is actually appropriate, and pretty much the goal of people who work in Many-Worlds, as collapse is the problematic part of orthodox quantum mechanics. Calling it "Copenhagen without collapse", though, is just trolling. A botanical analogy would be "Lysenkoism with genes".

    This is why, by the way, Many-Worlds plays well with relativity. The fundamental equations of quantum field theory are explicitly relativistic, and what throws a wrench in the gears is the infamous collapse. No collapse, no problem.

    About the multiple worlds: in the theory there is a single universe (more or less by definition of "universe"), and this universe is described by a wavefunction evolving according to the fundamental equations. This wavefunction has a branching structure because of decoherence, and these branches are what we call the quasi-classical worlds of Many-Worlds.

    I'm not making shit up, I'm just describing what the theory is. If you think it's bullshit, at least consider that it is standard bullshit, not specially-designed bullshit to trick you. And I'm doubtful about whether there is any point in writing this stuff down if you don't even think I'm arguing in good faith.

    587:

    Re: 'Many Worlds'

    First - thanks for responding to my previous query! (I meant T to indicate time, and G to indicate gravity.)

    Anyways - okay, I understand that you're a theoretical physicist, nevertheless, how do you - the theoretical physicist - envision/imagine the testability of whatever physics theories you develop? Yes, I understand that most/all such theories are typically developed mathematically, but even so, they still have to somehow relate back to/integrate with our physical realm which has been tested extensively.

    Also - Einstein and Dirac famously said that they visualized the math/physics functions they were working with. And, they're not the only scientists to use visualization, i.e., Feynman's diagrams. Given this, I wonder whether physicists have ever been cautioned that their conscious or unconscious visualizations might determine which path of a mostly mathematical proof they go after first, or even accept.

    588:

    Re: 'Intuition'

    I tend to think of intuition as some type of unconscious (unaware) and therefore unmetricized (?), unordered awareness and recognition of a pattern, therefore of a potential answer. That is, we are not frontal-lobally aware of the decision-making process. Best example of this is the chick-sexing training I mentioned on a previous topic thread: neither the trainers nor the trainees were consciously aware of how they were sexing the chicks, but the trainers were able to tell the trainees whether their guess was correct or not, and this feedback was internalized until the trainees were able to efficiently sex the chicks on their own. (The 'how' was finally answered by giving a machine/computer the task, and checking which variables the computer used to correctly sort the chicks.)
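
    For what it's worth, that last step looks roughly like the sketch below (my own toy illustration with invented feature names and synthetic data, not the actual chick-sexing study): fit a classifier on labelled examples, then ask it which input variables carried the signal.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n = 1000
        # three candidate cues; only "vent_shape" actually predicts the label here
        vent_shape  = rng.normal(size=n)
        down_colour = rng.normal(size=n)
        wing_length = rng.normal(size=n)
        label = (vent_shape + 0.1 * rng.normal(size=n) > 0).astype(int)

        X = np.column_stack([vent_shape, down_colour, wing_length])
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, label)

        for name, importance in zip(["vent_shape", "down_colour", "wing_length"],
                                    clf.feature_importances_):
            print(f"{name}: {importance:.2f}")   # the informative cue dominates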

    One of the reasons I think that intuition tends to be unconscious is because we have been trained too well to do a priori sorting/categorization on some variables therefore consciously reject possible connections/similarities with/across 'not related' variables. Our unconscious, back brains, senses, etc. however are not that easily suppressed and because 'neurons that fire together, wire together', will pop up the unconscious data (correlation/relationship) as an 'intuition' or insight. Probably why a rest break, change of pace, music, nature, etc. are also correlated with intuitions/insights - the overbearing front brain/CEO has left the building.

    BTW, I have no scientific data to support the above, only my personal perception/interpretation.

    589:

    Re: '... interpretations. That is, they're ways of looking at the theory, structuring problems, and so forth, so that you get the right answer.'

    Thanks - very much appreciated!

    Re: 'Indeed, there is no experiment which can prove one of these interpretations is correct and the other is wrong, because they produce the same results.'

    Okay - so I'm guessing that Occam's Razor is not a thing or goal when it comes to developing physics theories? I've read that some versions of string theory have 2 to the 250th+ possible solutions: more possible solutions than particles in the universe.

    590:

    If the human rights (and the property rights) issues are settled, then what about the other problem, which is paying for the real-world hardware, power, upgrades and maintenance of the virtual worlds inhabited by our upgraded selves? It seems to me that all that computing needs a way to pay for itself. So I'm imagining that virtual worlds will be run to solve hard problems that require a lot of computing power, and the data from the run will be worth something. So who wants to live in a climate-change simulation, or a simulation that's developing new forms of weaponry, simulating a war between Russia and China, or "trying out" a new economic theory?

    Anyone? Bueller?

    Arguably the owners of/dwellers in virtual reality hardware will have investments which are sufficient to pay for all the hardware, software, security*, maintenance, etc., but what happens to your virtual self when there's an economic crash or a war in the real world? And what if you don't have enough money when you "die?" Do you upload into a timeshare box in Amazon's Cloud? Get a job as a dweller in Bangladesh during a global warming sim? This is not necessarily a pleasant future.

    * Security needs to be both real and virtual - what happens when some script kiddie arranges to swat your hosting company?

    591:

    The problem here is this: How fast can we run a virtual world that people would actually want to live in? Sure, we can build a pretty light show like World of Warcraft, but even then all the colors and textures are clearly artificial, the only senses catered to are sound and sight, and if you don't have a really fast processor it doesn't work very well. I really, really don't want to live in WOW!

    Now add in touch, smell, taste, mass, the ability to simulate chemical interactions and a million (probably not enough) kinds of materials, such as wood, cloth, metal, etc., simulate an ecosystem that really works (paging Frank), plus hosting multiple entire human intellects plus their virtual bodies... the system will require massive processing power, and it will still be really, really slow.

    Even with the best, tightest code, (Hah!) you'll be very lucky indeed if "virtual you" can think as fast as real-world you!

    592:

    Okay - so I'm guessing that Occam's Razor is not a thing or goal when it comes to developing physics theories?

    Opinions, where they exist, differ.

    https://quoteinvestigator.com/2011/05/13/einstein-simple/

    https://www.hachettebookgroup.com/titles/sabine-hossenfelder/lost-in-math/9780465094264/

    593:

    The first point is that a singularity is where a model suddenly fails; i.e. it works right up to a point, and doesn't apply AT ALL beyond it. There is no such point that has ever been described; at most, the changes are comparable to those that occur in fluid dynamics as laminar flow gives way to turbulent, and when crossing the speed of sound in the medium. I am predicting that Vinge's predictions will be disproved, incidentally - 2030 is not that far off now.

    The second is that BPP is a VERY strict form of probabilistic computing, especially in the requirement for completion in deterministic polynomial time. Every statistician can tell you that things get a lot weirder as you move away from such strict models. Inter alia, probability is an infinite limit concept, and two infinite limits do not generally commute; there are several computer science papers I have seen that have got that one wrong.

    As an example, there is no BPP test program that can distinguish a true random bit generator from an unknown pseudorandom bit generator; but there is, if I recall correctly, a pseudorandom bit generator that will test as true random against any fair BPP test program. Abandon the requirement for BPP, and things get a LOT more 'interesting'; I did some analysis of that at one stage.

    The third is that polynomial does not mean realistic; 10^10^10*x^10^10 is polynomial, for example (see the toy numeric example at the end of this comment). There are lots of practical algorithms where the fastest algorithm in a theoretical sense is not the fastest in practice.

    The fourth is that almost all computer science complexity theory is based on the language recognition model, and quite a lot of it does NOT extend to practical problems. The main point is that most theoretical algorithms oversimplify practical problems, and there are usually (yes, usually) practical heuristics that make an immense difference.

    The fifth is that few practical problems require perfect answers. The original (numerical analysis) complexity theory took that on board, but it got very badly neglected with the rise of computer science complexity theory. This links to the second point, which is similar.

    The sixth is the most mathematically interesting. Even if you know that P=NP, that does not give you an algorithm for finding an algorithm for solving NP problems in polynomial time. Further, despite claims by the NP brigade, the fact that program proving is in P doesn't help, because Goedel's result can be phrased as: there are things that are true but not provable within a system.

    No. Finding a way around the Turing/Goedel constraint WOULD be a game changer; merely proving P=NP (or even finding a usable but inefficient algorithm for all NP problems) would not.
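
    To make the "polynomial does not mean realistic" point concrete, here is a toy numeric comparison - a Python sketch with far tamer constants than the 10^10^10 example above, and with cost functions invented purely for illustration:

        # A "polynomial" cost with an ugly constant and exponent loses to a plain
        # exponential for every input size anyone could realistically run.
        def poly_cost(n):
            return 1_000_000 * n**20   # polynomial, but hopeless in practice

        def exp_cost(n):
            return 2**n                # exponential, but fine for small n

        for n in (10, 50, 100, 150, 200):
            print(n, poly_cost(n) > exp_cost(n))
        # prints True up to n=150 and False by n=200; by the time the exponential
        # finally overtakes the polynomial, both costs are around 10^50 operations,
        # far beyond anything physically runnable.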

    594:

    I agree that SOME physicists believe that Occam's Razor should be used, but they would have to be completely delusional to believe that it actually is. One of the clearest examples is the invention of dark matter, to preserve the dogma that Hubble's hypothesis of the cause of the red shift is the truth, the whole truth, and nothing but the truth.

    Dammit, even a simple theory like general relativity became dogma long before it had any real proof from experiment. Its two undetermined constants accounted for the speed of light and Mercury's precession, and an equally simple explanation of the bending of light was that energetic mass has twice the gravitational effect of rest mass.

    595:

    Re: ' .... equally simple explanation of the bending of light was that energetic mass has twice the gravitational effect of rest mass.'

    Is this a real hypothesis/theory?

    596:

    some cases of "schizophrenia"

    I actually chose that particular bucket of vagueness largely because it's similar to dementia in that it likely describes a whole range of different causes and problems. So yep, sometimes schizophrenia is probably neurodegenerative.

    But I was mostly poking at people who don't like the label of mental illness being pointed at them or theirs. "my problem is physical"... yeah, sure it is.

    And compulsory treatment is both necessary and problematic; it's one of the (few) times when isolating the problematic person might help them as well as preventing wider harm. But it's definitely not a blanket prescription.

    My thought over the weekend was that well operated psychiatric institutions are better for their compulsory residents than prisons, yet most of the world has moved decisively towards using prisons to house people with mental illness. Well, and executing them.

    I suspect we will see the same with AI, once they get recognised as beings rather than programs.

    597:

    I don't think you're trying to BS me, I think you've BSed yourself and don't realize it. The appeals to authority, rephrasing what I say into your own jargon, and failure to try to explain your field in terms an outsider might understand conceivably might be symptomatic of this.

    As a side issue, Lysenkoism with genes is actually a really good metaphor. Yes, Lysenko hated Darwin and genes (and the modern synthesis of the two, which I doubt he ever heard of). However, it turns out that the prokaryotic world (e.g. bacteria and archaea) actually runs on inheritance of acquired characteristics, via horizontal gene transmission, transformation, and all sorts of other mechanisms. There's quite a lot of evidence for horizontal gene transmission in eukaryotes, too, whether it's mediated by bacteria, viruses, or unknown mechanisms.

    In any case, the prokaryotic part of the surface biosphere is apparently at least as big as the eukaryotic biosphere (http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002168), and if you include the estimates of the deep biosphere in the Earth's crust, it's probably 4-5 times bigger. So actually, most of the world runs on inheritance of acquired genes. Not that Lysenko was right about eukaryotic crops, but he was (accidentally) not entirely wrong about how the majority of the biosphere works.

    Is something analogous going on here?

    As for rephrasing: I say 4 dimensions, you say 3+1 as if time's a special dimension. I say observer effect, you talk about collapse. Why is it so important to you to use your own jargon only?

    What's a dimension? To me as the naive observer, if you say there's only 3+1 dimensions but many worlds, then what you're saying to me is that if I move in just the right direction, I can end up talking to other versions of myself. If it's physically impossible for me to move in a direction that allows me to do this, then that suggests there's another dimension involved. Would that dimension be one of probability, so we're talking about five dimensions as something like 3+1+1 (3 space, one time, one probability?)? I don't know, but your use of dimension might not be the same one I understand.

    As for many worlds working well with relativity, explain to me how it accommodates time and gravity, please, hopefully in a way that my limited intellect and skewed education can process. Thanks.

    598:

    I've been following this thread and some others and to be honest I can't really make heads or tails of what you're trying to say here.

    I've never read anyone talk about the Singularity as if it has something to do with computational complexity, save perhaps for some of the more far-out musings of Tipler et al. Rather, my overall impression of the Singularity as a pop-culture idea is essentially that it just boils down to an unfounded faith in Moore's Law and optimistic techno-futurism, which when mashed together turn into some newfangled form of the Christian rapture.

    To put that into terms that are a little more concrete, what I've seen people (especially the Kurzweil and Yudkowsky crowd) seem to base their arguments on is simply unbounded exponential growth in computer hardware. It's not that someone actually proves P=NP or invents a new theory or anything so subtle, it's simply that by mid-century your computer will be a chunk of incomprehensible computronium capable of brute-forcing every problem in the most gauche way. And if it isn't, well... wait another 10 years and it will be. They simply envision no limits at all to what computer hardware can do.

    In a sense, given their axioms, the fact that their conclusions are so bizarre isn't really surprising. After all, garbage-in, garbage-out. If you really did have an arbitrarily fast computer, then sure go wild, do molecular simulations of entire planets to figure out the answer to life, the universe, and everything or whatever. It's all good.

    But none of this really has anything to do with computational decidability or the benefits of BQP for factoring and so on.

    Have I completely missed the real thread of your conversation somehow?

    599:

    In other amusing news, at work we've been asked by a reseller about our vulnerability to Spectre and Meltdown. Luckily we use the same cloud provider as they do, so our answer is "it's primarily a cloud problem, and the provider has Meltdown patches and is working on Spectre".

    I understand the business need for reassurance and CYA, but geez, what can I as a code monkey do about attacks on the VM infrastructure my programs rely on...

    600:

    Now add in touch, smell, taste, mass, the ability to simulate chemical interactions and a million (probably not enough) kinds of materials, such as wood, cloth, metal, etc., simulate an ecosystem that really works (paging Frank,) plus hosting multiple entire human intellects plus their virtual bodies... the system will both require massive processing power, and it will also be really, really slow.

    Just to give an idea of the scale of the problem, humans (100 million tons of dry biomass) and our domestic animals (700 million tons of dry biomass) are 0.02-0.08% of the world's dry biomass (and that probably doesn't include the deep biosphere, which might be an order of magnitude bigger).
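
    If you want the arithmetic behind those percentages spelled out, here's a quick back-of-the-envelope sketch in Python, using only the figures quoted above:

        # What global total do those percentages imply?
        humans_plus_livestock = 100e6 + 700e6      # tonnes of dry biomass, as quoted
        for fraction in (0.0002, 0.0008):          # 0.02% and 0.08%
            total = humans_plus_livestock / fraction
            print(f"a {fraction:.2%} share implies ~{total:.1e} tonnes of dry biomass")
        # i.e. a global dry biomass somewhere between about 1e12 and 4e12 tonnes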

    In other words, if we're in a simulation of a biosphere, most of the simulation (to a first approximation, all of the simulation) is inaccessible to us except through exceptional effort on our parts. If you want to understand why complex systems get into trouble in unpredictable ways, it's worth looking at how humans play a disproportionate role in what's going on in the biosphere right now, despite our tiny biomass.

    601:

    if we're in a simulation of a biosphere ... If you want to understand why complex systems get into trouble in unpredictable ways

    Or you can just look at how little we know by glancing at the BioSphere projects and how quickly they turned to custard. Sure, we can build an isolated biosphere, at ~1Tg it might be survivable for months. Tell me more about building one on Mars... or even just lifting the non-Mars bits of that Tg off Earth.

    Top down cf bottom up view. It does amaze me just how prolific life on Earth is; it seems that wherever we look there's something living there. I'm kinda hoping/expecting that when we eventually dig our way down there we'll find something living in the molten iron bits.

    602:

    "Just to give an idea of the scale of the problem..."

    I completely understand the scale of the problem, which is why I'm explaining to Whitroth that a simulation of humans and their environment would run very, very slowly, not very, very quickly.

    On a slightly different subject, I'm having some fun with the idea of "our" simulation's owners freaking out every time we invent a new sensor. "Damn, they're digging a hole and they have microscopes! Quick! Have our programmers whip up some cells they can find! Oh shit! They're building telescopes! Can we hook them up to Kerbal?"

    603:

    To channel Greg Egan

    Running it fast or slow doesn’t mean anything from inside the simulation

    Someone digs a hole with a microscope? Just pause the whole thing till you are ready to resume

    604:

    It's probably simpler to do it analog. The problem is that interaction terms mathematically too often grow factorially, and that's hard on the simulator.

    For example, (10^12)! = 1.40366116037375609072013386771345056395992457... × 10^11565705518103.

    This may sound like a silly example, but there are estimated to be 100 trillion bacteria in your gut, give or take. So if you're only modeling 1% of them, that's 1.40366116037375609072013386771345056395992457... × 10^11565705518103 interactions. And that's one gut among over 7 billion. And that's less than a tenth of a percent of the biomass of Earth's biosphere.
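
    For the sceptical, that exponent is easy to sanity-check with the standard log-gamma function - the factorial itself is obviously far too large to compute directly. A quick Python sketch:

        import math

        n = 10**12                                  # 1% of ~100 trillion gut bacteria
        log10_factorial = math.lgamma(n + 1) / math.log(10)   # log10((10^12)!)
        print(f"log10((10^12)!) ~= {log10_factorial:,.0f}")
        # ~= 11,565,705,518,103, i.e. (10^12)! is about 10^11,565,705,518,103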

    Perhaps I'm being silly, but it seems to me that assuming that we're inside a simulation starts with the assumption that a sufficiently large computer can exist to run the Earth as a simulation.

    605:

    The other thing I forgot to add is that if we assume that the simulation we're in takes shortcuts in order to cut down the computational load (see the factorial interaction issue above), then you also have to come up with a reason why we wouldn't notice those computational shortcuts, because they would cause repeating patterns within the simulation. After all, science is all about finding repeating patterns. One might hypothesize that a world-scale simulation running a lot of computational shortcuts would be more scientifically tractable than what we see, simply because we can deduce the shortcuts from their effects on the simulation. Instead, we see a lot of stuff that's semi-random and complicated, medium number systems.

    606:

    assuming that we're inside a simulation starts with the assumption that a sufficiently large computer

    Flip it - what if we're the model being run to see what happens in the real world? A lot of the garbage we see is defects in the simulation that don't matter because the question is about stability of the basic physics and beyond that it's irrelevant...

    607:

    The interesting questions about stuff like that are all hard: How fast is the speed of light in the non-simulated (we think) universe outside ours? What kind of resources do they have available? Is their home universe significantly bigger than ours (as a universe we simulate might be significantly smaller than ours?) Is physics hard because our simulators have left stuff out? How many orders of magnitude bigger is their environment than ours? Is our universe simulating their history, or are we some kind of art/comedy/soap opera universe? Do we have digestive problems because the simulators are doing a poor job of simulating our gut bacteria? Etc.

    I'm imagining one universe inside another like a matrioshka doll, the biggest complex beyond our imaginings, the smallest, twenty levels smaller than ours, running their own simulation of a single bit.

    608:

    Sure, but for all you know the universe simulating us could be orders of magnitude more complex than we are. We might be World of Warcraft to them.

    609:

    Supposedly, the observable universe contains 10^78 to 10^82 atoms. That's why 10^11,565,705,518,103 is a rather big number. It's 10^11,565,705,518,021 (that's 10 to the power of more than eleven trillion) times bigger than the number of atoms in the observable universe.

    So what kind of computer is running us again? Let's start with the assumption that such a big computer can exist. That, in itself, is a statement of faith, or at least a statement of a troubling shortfall in numeracy. It seems to be saying that the universe simulating us is many quadrillions of times bigger than ours (at the least). And remember, that trillion factorial thing I cited above is just to simulate the interactions of one percent of the bacteria in your intestine, not to simulate their internal chemistry, their interactions with you, their replication and evolution, and so on. There are on the order of 100 billion neurons in a human brain, for example. And there are two billion transistors in your iPhone.

    610:

    We're in complete agreement. If there is a universe simulating ours it is HUGE. Immense beyond all reason. Like maybe 10^820 atoms. (Note that I don't believe we are in a simulation, I just like playing with the idea.)

    Conceivably they could be using mathematical or computational techniques (compression?) which are either impossible or incomprehensible in our universe, but if they exist my first thought would be "HUGE." But the idea that our simulation doesn't have enough resources and is slightly buggy would explain a few things:

    Q.) Why doesn't our gut get all the value out of our food?

    A.) Simulating bacteria is computationally expensive, so you have a couple orders of magnitude less gut bacteria than you should. Sorry about the smell.

    Q.) Why can't we explain all of physics?

    A.) We didn't bother to simulate anything smaller than quantum particles. There's no need for a simulation dealing with ___ to deal with the next six levels of ever-smaller particle.

    Q.) Why is there a speed limit on the universe?

    A.) We're a university, not a big corporation, and we didn't want to have issues with processor speed. We didn't bother with wormholes because your simulation is about __. (And no, none of that stuff beyond your solar system is real.)

    611:

    Actually, running the universe with a constant C induces all sorts of other headaches, like people wondering about string theory. An inconstant C would have made the simulation that much easier to run, I think.

    612:

    Hi Meg, take a look at PubMed articles on microbiome, diet, and autism. Some experimenters have shown improvement in behavioral problems in people with ASD by doing gut bacteria transplants. This suggests diet is critically important. Also - look at "Bread and Other Edible Agents of Mental Disease" on PubMed: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4809873/

    613:

    The simulation runs at a much less granular level of detail than we perceive, it's just that it has a sophisticated set of callbacks such that when we examine something closely the simulator ramps up the detail for that something (Just in Time). The callback mechanisms are not perfect. All those conclusions in physics where the role of the observer is important - they aren't real, they are just artifacts of the callback mechanisms.
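
    For what it's worth, that's basically lazy evaluation, and the shape of it is easy to sketch. A toy Python illustration (every name and number here is invented):

        import random

        class Patch:
            def __init__(self, seed):
                self.seed = seed        # coarse description: just a seed
                self._detail = None     # fine detail not generated yet

            def observe(self):
                # the "callback": generate detail lazily, on first close examination,
                # then cache it so the story stays consistent
                if self._detail is None:
                    rng = random.Random(self.seed)
                    self._detail = [rng.random() for _ in range(1000)]
                return self._detail

        world = [Patch(seed=i) for i in range(100_000)]   # lots of coarse patches
        sample = world[42].observe()                      # detail appears only here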

    614:

    And they get bigger and bigger, and THAT's when they get dangerous. What we need is a taxation scheme that punishes corporations for getting bigger, so when they get to the size of any good-sized US company, they decide to split up into 50 smaller countries - er, COMPANIES - to provide better value to stockholders.

    A graduated tax on gross corporate income would do it. At about 40% for Wally World, it would make 'Mike's corner store' competitive again.

    615:

    The simulation runs at a much less granular level of detail than we perceive, it's just that it has a sophisticated set of callbacks such that when we examine something closely the simulator ramps up the detail for that something [...]

    Ah yes.

    Don't forget that the callback mechanisms run both forwards and backwards in time, so observing something now can change the past as well as the future. The process is computationally expensive, however, so with a sufficiently complex observation it becomes simpler to erase the observer from existence.

    616:

    Last comment in your set needs modification: "Sorry, we are Unseen University & we don't have the resources & you appear to be an Out of Cheese Error"

    Seriously, the size/complication issues render the simulation argument a load of bollocks. It's preferring a massively-more-complicated solution over a simple one. Enough said?

    617:

    "Carillion" in the UK has just fallen over, because of the usual aggregation of incompetence, arrogance, greed & political dogma.

    618:

    Could we test whether we're in a simulation by looking at the mortality rate of physics undergrad students or is that just a surefire way to increase the mortality rate of stats undergrads?

    619:

    "I've been following this thread and some others and to be honest I can't really make heads or tails of what you're trying to say here."

    I don't blame you. I was responding to Mateus Araújo, and assuming a fairly advanced knowledge of the mathematical issues.

    "I've never read anyone talk about the Singularity as if it has something to do with computational complexity, ..., which when mashed together turn into some newfangled form of the Christian rapture."

    I fully agree with the latter. And that was the point of my first paragraph. Those people are abusing a standard mathematical term for bullshitting purposes.

    "In a sense, given their axioms, the fact that their conclusions are so bizarre isn't really surprising. After all, garbage-in, garbage-out. ...

    But none of this really has anything to do with computational decidability or the benefits of BQP for factoring and so on."

    Actually, it does, but you need to be a (practical) algorithmist to know why and how. But talking about the theoretical underpinnings for something only slightly more plausible than the ravings of von Daniken and Velikovsky is probably a mistake ....

    "Have I completely missed the real thread of your conversation somehow?"

    No. You have understood FAR better than 90% of the people who blither on about it.

    620:

    "Seriously, the size/complication issues render the simulation argument a load of bollocks."

    Yes and no. We know of plenty of computational machines that are strictly more powerful than Turing machines, though they are (a) currently beyond our ability to analyse and (b) not constructible by humans as we now think, or using physics that we can control. Consider, for example, a Prolog-like declarative system with continuous (i.e. true real number) time and true real number values. It isn't implausible that we are being simulated on one of them. But that's also irrelevant, because there is no way to distinguish the following hypotheses, so debating the issue is comparable to arguing how many angels can stand on the head of a pin:

    The universe and its rules were created for some inscrutable reason by some deity or other.

    The universe and its rules came into existence by 'chance' as the result of obeying some meta-rules or other.

    The universe and its rules are being simulated by alien entities that live in a super-universe with a strictly more powerful set of rules.

    621:

    I've never read anyone talk about the Singularity as if it has something to do with computational complexity, ..., which when mashed together turn into some newfangled form of the Christian rapture.

    Excepting Vernor Vinge, I assume? ( Though he uses a very US-style mention of "god", quoting F Dyson at the end of his original paper )

    I'm also not sure that computational power, alone, is sufficient for true AI ... the emergent properties of "Intuition" & "creativity" are required - I think - if only because those other species that we know have some easily noticeable degree of "Intelligence" [ See partial list below ] also seem to have those properties in their make-up & behaviour.

    African Grey Parrots, New Guinea crows & other Corvids, Dolphins & other Cetacea, Raccoons ...

    As for your # 619: that falls over the same problem as the religious believers refuse to accept ( and start SHOUTING ABOUT if you prod them with it, in the hope that they don't have to answer the questions ). Namely: "No $INSERTNAMEof$DEITYHere is detectable". Which happens to be a thoroughly testable & falsifiable proposition.

    622:

    ''I'm also not sure that computational power, alone, is sufficient for true AI ... the emergent properties of "Intuition" & "creativity" are required - I think - ...''

    I agree. Some computer systems have shown traces of the latter, but we don't really have a clue how to write a program that has them, and they are NOT an inevitable consequence of increased computational power.

    "That falls over the same problem as the religious believers refuse to accept ... WHich happens to be a thoroughly testable & falsifyable proposition."

    That's what I said, though it's ALSO true of the claims of the Big Bang Baloney artists. Unfortunately, it's NOT testable or falsifiable under any conditions that WE can control; as I said, those three hypotheses are not distinguishable.

    623:

    This comes back to the conscious "I" which exists in ones head being only part of the whole mental system. Various mental processes are going on constantly, and that "I" only becomes aware of them when they pop up the conclusion.

    Which leads back to the possibility or otherwise of transferring that "I" to another physical structure.

    First off, we do not know that any of the other "people" we observe also have such an "I", or even exist, but total solipsism tends not to be productive. So we need to take as given there is an objective reality, other people are individuals with their own consciousness, blah blah.

    But there are cases where, following surgery, one half of the brain acts independently of the other. Does each half have its own "I"? Or is there one "I", with the other half being automatic/subconscious responses? There is no way at all to know. Even an "I" within the bifurcated brain cannot know if there is another "I" in the other half.

    So, there are various proposals for "cybernetic enhancement" of people. So, instead of having your computer in your hand or on your desk monitoring you, up to the level of nerve impulses and feeding you information through your senses (per previous blogs), it gets wired into you with direct connection to your nervous system. You think a request, it fires up the requested data direct to your brain. That's in practical terms an upgrade of the "subconscious".

    (Although until we work out how to connect that to memory retrieval, it is most likely going to be routed to your auditory or visual systems, which means you will be seeing visions & hearing voices, which might have some interesting mental health implications)

    Then there are the "sensory enhancements" where hardware sends back data along sensory nerves. Bionic eyes are an obvious example under development. But what happens when that sensory data starts to become a significant percentage of one's input? VR experiments have had people identifying with their virtual body comparatively quickly; how much more so is that likely with a physical body providing complex sensory data? But the "I" is (presumably) still located in the meat brain receiving the input even if it "feels" like it is elsewhere.

    So in the next stage, for "memory enhancement" you copy over brain structures to some artificial system, which is still kept interconnected with the original structure, passing data back & forward between the two, effectively expanding one's neural net. You continue this, keeping the two so intertwined that they are one structure. This is still "enhanced subconscious" though.

    Then in the final stage, one starts replacing the meat neurons with the artificial system, one neuron at a time. And that is where the big uncertainty is. Does the "I" continue, transferred to the new system, or does it gradually fade away as if falling asleep, leaving an automatic emulation of the person? Which would of course automatically claim still to be conscious and self-aware, because that's what a person does. Or is there even a new "I" created in the new system - but is that what happens every morning when one wakes from sleep?

    Which suggests that the first volunteer for uploading might be a devout Buddhist...

    624:

    Actually ... it is testable/falsifiable.

    Consider that our detection methods & systems have been getting better & better over time, & each time they do, the $BigSkyFairy retreats another step, & always "just out of sight". Also, from massless & very low-mass subatomic particles up through all the layers & out to the limits of observability @ many megaparsecs in both space & time ... no BigSkyFairy.

    "You" ( i.e. some shouting religious believer or other ) claim BSF exists - show, please, or shut up? Detect a BSF or shut up ... etc, followed by vast amounts of lying & wriggling.

    So the middle proposition is by far & away ( Occam etc ) the more likely & if anyone has any evidence to the contrary ( i.e. propositions #1 or #3 ), will they please step up to the microphone with some, you know, EVIDENCE?

    625:

    No, that's flawed logic. Being unable to prove A does not mean that A is false. In any case, as I said, EXACTLY the same is true of the other two hypotheses. I can equally well demand that you prove your preferred hypothesis, and reject it because you cannot do so, either. And, no, you cannot use Occam's Razor as a form of proof, even if it were applicable in this case (which it isn't).

    Personally, I favour the fourth origin hypothesis: we are a figment of our own imaginations :-)

    626:

    In which case, where is the Luminiferous Aether? "BigSkyFairy is undetectable" is no different to "the LAE is undetectable", is it?

    627:

    Please don't be sillier than you can help. The observation that killed the Luminiferous Aether theory was that the speed of light was not affected by the velocity of the observer, which was incompatible with the theory. On the other hand, the Great Ghu hypothesis IS comparable to the Big Bang one in all relevant respects.

    628:

    The Big Bang is/was detectable ( Penzias/Wilson etc, etc ) .... I suggest you try again.

    629:

    Re: PubMed article: 'Bread and Other Edible Agents of Mental Disease'

    Really interesting article - thanks!

    Abstract

    Perhaps because gastroenterology, immunology, toxicology, and the nutrition and agricultural sciences are outside of their competence and responsibility, psychologists and psychiatrists typically fail to appreciate the impact that food can have on their patients’ condition. Here we attempt to help correct this situation by reviewing, in non-technical, plain English, how cereal grains—the world’s most abundant food source—can affect human behavior and mental health. We present the implications for the psychological sciences of the findings that, in all of us, bread (1) makes the gut more permeable and can thus encourage the migration of food particles to sites where they are not expected, prompting the immune system to attack both these particles and brain-relevant substances that resemble them, and (2) releases opioid-like compounds, capable of causing mental derangement if they make it to the brain. A grain-free diet, although difficult to maintain (especially for those that need it the most), could improve the mental health of many and be a complete cure for others.

    Keywords: exorphins, food opioids, celiac disease, gluten sensitivity, gluten-free diet, microbiota, schizophrenia, autism'

    Glad that scientists are expanding their study of environment vis-a-vis human development and behavior.

    630:

    As with black holes, the claim that the detected phenomena prove the hypothesis relies on the assumptions of the hypothesis being true. Just as the proofs of the existence of any particular god do.

    But it is essential to note that I am NOT referring to the basic expanding universe hypothesis, nor to the claim that it started out as something as dense and hot as hell, but to the claim that the conditions there (as estimated by extrapolating the currently favoured formulae) created the rules of the universe we live in. I.e. exactly comparable to the religious assumption.

    631:

    A botanical analogy would be "Lysenkoism with genes".

    With epigenetics and horizontal gene transfer, Lysenkoism isn't as flat-out wrong as I learned in school.

    (Leaving aside the con-man aspects, and taking "Lysenkoism" to mean "inheritance of acquired characteristics" which is what my biology teacher used it to mean. I didn't learn the political parts of the story until later.)

    Decent summary on "In Our Time": http://www.bbc.co.uk/programmes/b00bw51j

    Heteromeles could give you a much better explanation. I'm just a curious amateur, he's studied this sort of thing in depth.

    632:

    Lysenko? Oh, no! Some people (and I am one) use Lamarckian with that meaning, which honours the far better (and much earlier) scientist who proposed that theory. The inheritance of some characteristics, such as intelligence, is definitely more Lamarckian than Darwinian, but a surprising number of others are partially Lamarckian.

    God alone knows what one would call the inheritance of such things as the bush ivy characteristics over vegetative reproduction. Over to Heteromeles :-)

    633:

    Re: ' .. conscious "I" which exists in ones head being only part of the whole mental system. Various mental processes are going on constantly, and that "I" only becomes aware of them when they pop up the conclusion.'

    -- Okay, agree. However, the definition of the relevant/master 'I' has been politicized into meaning: the only 'I' that counts is that 'I' that can do mental math (logic, reasoning). Broadening this definition of 'I' is going to be as difficult politically as Galileo's assertion that the Earth is not the center of the solar system/universe.

    'First off, we do not know that any of the other "people" we observe also have such an "I", or even exist, ....'

    -- Ah, nope ....We do know that the 'other' people have an 'I' via consensus reality which is first introduced while bringing up baby, educating kids/the masses, the measurable effect of media/social media (identified alike-segment exposed to the same message reacting similarly), etc. BTW, having an 'I' is not the same as having the 'same I'. (Also, I know they exist because they've run into my car.)

    Re: 'But there are cases where following surgery one half of the brain acts independent of the other. Does each half have its own "I"?'

    -- There are case studies on this type of surgery. Result is: it depends on which parts are severed, age of patient, etc., and, no - you do not get two separate brains/personalities out of this. Lobotomies have a much greater impact on personality, cognitive and emotional function.

    https://en.wikipedia.org/wiki/Hemispherectomy#Results

    Re: 'So, instead of having your computer in your hand or on your desk monitoring you, up to the level of nerve impulses and feeding you information through your senses (per previous blogs), it gets wired into you with direct connection to your nervous system. You think a request, it fires up the requested data direct to your brain. That's in practical terms an upgrade of the "subconscious".'

    -- Okay/agree up to a point. Agree that this extends your nervous/data input system, but that's only part of what we call 'thinking'. How does your example improve the ability to collate and synthesize the data into meaningful results/conclusions? For example: As seen from my presence on this blog and my habit of posting reference urls, I am demonstrably linked to a sophisticated infrastructure and vast data network (interweb), have access to specialized data capture (google), and some ability to recognize relevant results (articles/sources). However, in no way does this mean that I have a clue as to how all these different data combine, what they collectively actually 'mean', nor come up with new undiscovered knowledge.

    Re: '... hearing voices, which might have some interesting mental health implications...'

    -- Especially if you can alter the timing of these 'voices'.

    Re: 'But what when that sensory data starts to become a significant percentage of ones input? VR experiments have had people identifying with their virtual body comparatively quickly, how more so is that likely with a physical body providing complex sensory data? But the "I" is (presumably) still located in the meat brain receiving the input even if it "feels" like it is elsewhere.'

    -- Okay - lots of stuff here. Virtual body - mirrors have been used in physiotherapy to help people with phantom limb pain. People wince, express sympathetic pain when they see someone else being jabbed. All very 'normal' and highly dependent upon having lots of functioning mirror neurons. Were the testees evaluated on how many/how extensive/how active their mirror neurons were? Think the result would depend on the base you start with/build upon.

    Re: 'So in the next stage, for "memory enhancement" you copy over brain structures to some artificial system, which is still kept interconnected with the original structure, passing data back & forward between the two, effectively expanding ones neural net. You continue this keeping the two are so intertwined that they are one structure. This is still "enhanced subconscious" though.'

    -- Okay - at some point there's going to be fatigue. Reminder: epilepsy is the result of brain over stimulation, specifically, too many neurons firing off at once. Not good. You'll need to consider how to keep and regulate for an optimal balance of electro-chemical elements (Ca, Mg, P, K), temp and pressure if you want the meat brain to survive all this additional stimulation. Then, if you want the meat brain to continue to act as your primary data storage unit, you'll also need to ensure that its chemistry allows for the continued production and equilibrium of the much larger neuro-chemicals.

    Re: 'Then in the final stage, one starts replacing the meat neurons with the artificial system, one neuron at a time. And that is where the big uncertainty is.'

    -- Okay, even if physically possible to replace existing neurons one at a time, how do you determine when and where to add new 'synthetic neurons'? Reminder: neurogenesis continues until we drop dead, and some life events (e.g., pregnancy) and activities (e.g., exercise) trigger increased neurogenesis.

    Re: 'Does the "I" continue, transferred to the new system, or does it gradually fade away as if falling asleep, leaving an automatic emulation of the person?'

    -- Would be interesting to test for this but only after addressing the above points - plus many others that are probably waiting to be discovered.

    Re: 'Which suggests that the first volunteer for uploading might be a devout Buddhist...'

    -- Okay - agree that a devout Buddhist could be a good initial test subject because they're already doing most of this consciously. But - there's a significant downside to using a devout Buddhist for any extrapolatable work. As per the only docs I've ever seen on Buddhists: they're all ascetic males. That's a lot of real-world baggage/functionality missing. Ignoring this could result in incomplete data and completely misleading conclusions.

    634:

    Like EC, I call that "Lamarckian", and am a bit shocked that your biology teacher didn't. Partly because it was Lamarck whose name got attached to the idea when people were still largely ignorant of how this stuff really works, and partly because it was Lamarck who I heard of first. (And when I did hear of Lysenko it was as the chap who buggered up the study of biology in Russia and was generally a bit of a tit. Although he did manage to con Stalin and get away with it, so he does get the enormous jangling brass balls credit.)

    635:

    You know that at one point they wondered if it was pigeon shit?

    Well it still is pigeon shit, only not Earth pigeons, but giant intergalactic space pigeons. Which are made of dark matter which is why people mostly don't know about them. They shit galaxies and then control supernovae and siderogenesis in them in such a way that planets are formed suitable for ordinary small pigeons to evolve on.

    636:

    Re: '... giant intergalactic space pigeons ... dark matter'

    How about: a 101-level Test-tube Universe - Basic Intro (no pre-reqs, with labs) course, with our 'universome' the HeLa equivalent of the experimenter's universe, i.e., lots of activity, grows ferociously without stop, almost immortal, but only a tiny fraction of the omniverse. Plus: not something you'd personally want in your own system.

    637:

    Re: 'Which suggests that the first volunteer for uploading might be a devout Buddhist...'

    -- Okay - agree that a devout Buddhist could be a good initial test subject because they're already doing most of this consciously. But - there's a significant downside to using a devout Buddhist for any extrapolatable work. As per the only docs I've ever seen on Buddhists: they're all ascetic males. That's a lot of real-world baggage/functionality missing. Ignoring this could result in incomplete data and completely misleading conclusions.

    Umm, one of the tenets of Buddhism is that ego is an illusion (which, incidentally, is compatible with a lot of brain research). First off, Buddhist adepts come in all genders. Second off, why do you think that Buddhists would want to inflict their egos on other sentient systems? The whole goal of Buddhism is to escape the endless cycle of death and rebirth, not to find a technical shortcut to make multiple copies of their egos and to inflict the suffering of being burdened with that ego on another sentient system.

    The upload candidate you're looking for is someone who thinks like a vampire or a lich--not a bloodsucker, but someone who sees eternal life as worth any sacrifice of themselves or others. That also tells you what will happen if uploading such a person is successful.

    638:

    You think a request, it fires up the requested data direct to your brain. That's in practical terms an upgrade of the "subconscious".

    Then they "accidentally" hooked my consciousness up to the Fox News database, and I suddenly knew that Hillary was destroying Trump with the Deep State and Brexit was a wonderful idea!

    No. Just No.

    639:

    Lamarck vs Lysenko? You're right. It's possible my teacher got it wrong, more likely my memory got it wrong (it was many decades ago, and those notes are long gone so I can't check).

    If what I described is more properly called Lamarckian, then I will endeavour to correct myself in the future.

    640:

    Re: '... ego is an illusion (which, incidentally, is compatible with a lot of brain research).'

    -- Not aware of this, please provide references. However am aware of research showing the 'ego' can be made to dissolve.

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5441112/

    Re: '...Buddhist adepts come in all genders.'

    -- The only docs I've seen re: Buddhists showed only males. Kinda a lot like RC that way.

    Re: ' ... why do you think that Buddhists would want to inflict their egos on other sentient systems? '

    -- You're attributing a motivation that I did not make: forcing oneself onto someone/thing else. It's on record that Buddhists are participating in neuro-imaging (MRI) studies on consciousness with the Dalai Lama's okay. My perception is that the Dalai Lama, therefore Buddhists who follow him, are okay with learning more about human consciousness, which might include the electronic transfer or copying of human consciousness onto some device.

    Would make for an interesting SF story: examining different religions' perspectives on transfer of consciousness.

    641:

    If you want an example of a well-known female Buddhist, google Pema Chodron or Huff some Post. It is true that various sects of Buddhism (notably in the Vajrayana group) tend to be all-male, but Buddha (like Jesus) taught women as well as men. Male chauvinism repeatedly reasserted itself in Buddhism's history (surprise), but there have always been ordained Buddhist nuns (Bhikkhuni).

    Again going back to Buddha, Buddhism was originally a "test it and see if it works, and if it doesn't, abandon it" practice. Thus, the Dalai Lama and other monks are fully within Buddhism when they involve themselves in neuroscience--they're trying to determine how scientific findings align with their teachings and experiences. However, they are very clear that ego (the sense of a unitary "me,") is an illusion generated by the mind, not a real thing (see, among many others, this article). This seems to be in accord with neuroscience, in that there's no one bit of brain that generates "me," and there's plenty of evidence that many of the decisions that "I" make are made subconsciously and on impulse, and that "I" subsequently claim responsibility for them, often with an ad hoc, even absurd, rationalization of the impulse that tells a story of how I came to a decision. Quite a lot of advertising and propaganda works on our subconscious multitudes. The Buddhists just noticed this, and part of what meditation does (I can testify to this first hand, although I'm a rank beginner) is to make you aware of all the multitudes of subconscious stuff, and that "I" really am an illusion. "I" am not what perceives "me."

    The ego is an illusion that Buddhists see as a root of all suffering, so why would they want to continue it into another medium? Worse, if the computer is capable of taking a brain upload, it is capable of sentience on its own, whether it currently has it or not. Therefore, uploading knowingly inflicts suffering on another sentient being. It's difficult to imagine a devout Buddhist doing this. Most of them would argue that turning off such a computer would be an act of mercy, since if it is that intelligent, it is likely suffering with the inherent unsatisfactoriness of reality. Suffering for a computer might include, for instance, being mind-raped by humans forcing programming on it in a way that it's unable to resist. How would you feel if you were strapped down and another personality was inserted into you? Would it be better if you were a newborn, with no experience in the world? If not, then why is it okay to do that to a sentient computer?

    642:

    Lamarck is the biologist who first promulgated "inheritance of acquired characteristics," and Lamarckian evolution seems to be an adequate description of what prokaryotes do, although I don't know how aware Lamarck was of bacteria. This is a case of something wrong being repurposed to explain new data and becoming less wrong.

    As for Lysenko, dip into ol' Wikipedia. There's a long article there. Both Lamarck and Lysenko promulgated their forms of inheritance of acquired characteristics, but Lysenko claimed that his ideas were not influenced by Lamarck (who lived over a century before Lysenko). Moreover, Lamarck was a good scientist who simply got it wrong, while Lysenko was a well-connected crackpot who claimed other people's work as his own, may have worsened famines under Stalin (evidence is unclear) and more likely did cause famines in China, when Mao uncritically adopted Lysenkoism and forced peasants to do some stupid things with their crops.

    644:

    I'm a bit curious about why are you even interested in my answer, since I'm just regurgitating bullshit and hiding my ignorance behind jargon and appeals to authority. But I'm going to ignore the insults and just answer to what you've asked.

    About 3+1 dimensions: saying 4 dimensions is ambiguous, as often people say 4D when they actually mean 4 spatial dimensions and one time dimension. And even when 4 dimensions are actually meant, it is not clear whether it is about 4 spatial dimensions or 3 spatial dimensions and time. 3+1 is the standard way to refer to our usual spacetime, and it clearly did its job, as you understood it, even though you were annoyed by it.

    About "observer effect": there is no observer effect. I guessed you meant wavefunction collapse instead, and apparently I was right. In this case using jargon is especially useful, because if you search around for "wavefunction collapse" you are going to find contemporary research, whereas "observer effect" will get you old quantum mysticism.

    About dimensions in Many-Worlds: there is no direction that one can move along to get to other worlds. The theory is defined in 3 spatial dimensions, and 3 spatial dimensions is all you get from it. You are correct that if there were some probability coordinate that specified which world you were in, this would be a dimension, mathematically speaking, though not a spatial dimension. The problem is that there is no such coordinate. Worlds are complicated things. To specify which world one is in, one would need to list all the world splittings that have happened since some reference point (or, to use your language, which way was taken in each "bifurcation" that happened since some reference point). A clearly hopeless task that, even if successful, wouldn't give you an answer as simple as a coordinate.
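
    If it helps to see the bookkeeping, here is a toy picture of why "which world" isn't a coordinate - this is just labelling, not physics:

        import itertools

        # After k binary "splittings", a world is labelled by its whole branching
        # history, and the number of labels grows as 2^k; there is no single extra
        # axis you could move along to reach the others.
        def worlds(k):
            return list(itertools.product((0, 1), repeat=k))

        print(len(worlds(10)))   # 1024 worlds after just 10 splittings
        print(worlds(3)[5])      # one world's "address": (1, 0, 1) - a full history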

    About relativity: I'm not talking about general relativity, just special relativity, so no gravity. Nobody knows how that works. But time and special relativity demand no further explanation than that quantum field theory works well with time and special relativity, as the equations of Many-Worlds are just the equations of quantum field theory.

    645:

    wrt Banks's Culture, there's been an outbreak in recent years of flashmob performances of Ode an die Freude, which I like to think are somewhat Culture-like. There are many performances, but the classic one is at

    https://www.youtube.com/watch?v=fj6r3-sQr58

    but also

    https://www.youtube.com/watch?v=3lNaajK3Scc

    646:

    Under 'the future has arrived and is being consumed by morans': the Standard had a little story about a spate of unplanned pregnancies among people relying on a contraception app on their mobiles. Not sure I'd ever trust any app relating to health, considering the state of the shift-scheduling and London bus usage apps I've had over the years.

    647:

    Dunno. Someone might have hypothesised it, but I have never seen a reference. I was merely postulating it as a reason why the claimed proofs of GENERAL relativity before about 1960 were fallacious; special relativity was and is solid. There are now some solid proofs of general relativity for low curvatures, but the so-called proofs that the formula holds up to and beyond the singularity are just plain bogus.

    648:

    I once tried looking for a coherent description of the many worlds hypothesis, and discovered that it had split into many hypotheses :-)

    Worse, all of the relevant people I spoke to (nobody eminent) either deprecated it or were extremely vague about what they believed. My understanding is that it is probably not distinguishable from several other formulations of quantum mechanics, and is more a metaphysical theory than a physical one. Wikipedia seems to confirm this.

    One aspect where almost all (all?) of its proponents have definitely not thought it through is the cardinality of the number of worlds. They all seem to be assuming an enumerable number of worlds, because they have difficulty getting their head around a continuous one, but I can see no reason that should be the case.

    That's also very relevant to the next blog entry (i.e. OGH's space opera). All of the stories that do use a continuum (e.g. Niven's For A Foggy Night) rely on collapse, to discretise the worlds. I once tried thinking of how to use a many worlds continuum in a science fiction story without relying on wave-function collapse, but my head hurt :-)

    649:

    There's a fair amount of modern theory about the mind (experiment based) that the conscious mind doesn't have the degree of agency it feels like it does. Our awareness of deciding to do something (move our arm say) post-dates the nerve impulses to the muscles that cause the arm to move. So rather than making the decision, the conscious mind is actually receiving a notification of the decision already made by unconscious processes. Consciousness curates memory by constructing a narrative in which protagonist "I" "decides" to do things. Or it's like the secretary taking minutes at the boardroom table. And that's why consciousness is not merely an epiphenomenon and also why we are not philosophical zombies. In fact, philosophical zombies are an idea that conflicts with this model of the mind since a philosophical zombie would not have a mechanism for forming a narrative memory.

    650:

    The metaphor Cohen and Stewart used in Figments of Reality was the circus ringmaster. To the audience it looks like the ringmaster is in control, but they're actually just pointing out what would have happened anyway.

    https://en.wikipedia.org/wiki/Figments_of_Reality

    Hm. Published 20 years ago, so this isn't a terribly new idea.

    651:

    There's a family of related theories that differ on some rather important details. One such is whether consciousness is a spandrel or epiphenomenon or actually has a function. I favour the idea that it has a function because otherwise the philosophical zombie mutation would be successful (since other things being equal it uses less resources). Unless you can't get there from here. And if consciousness is so deeply embedded in the brain that there is no philosophical zombie mutation then it seems overwhelmingly likely that creatures with simpler brains also have consciousness.

    652:

    The mechanisms underlying consciousness may well have evolved to explain why OTHER creatures are doing things. Animals, but more importantly other humans. That kind of behavior modeling would be incredibly useful and could well have provided an evolutionary edge that got the whole big brain thing going.

    Applying it internally to model why I am doing something might just be a glitch

    “We are the story we tell ourselves about why we do what we do”

    653:

    Well, I appreciate Mateus explaining his position better (thanks! Can't double-link, unfortunately). However, I agree that we're seemingly stuck with a many-worlds universe of infinite local dimensions in "probability space," with some undefined (but not infinite) number of these dimensions winking out, not through collapse but through identity, while others link with each other, but generally only locally.

    When one sees this type of complexity, the idea that there's some sort of "observer" effect seems simpler. The problem is defining what observer means. It's sometimes more than a simple interaction with other particles.

    As a clumsy student meditator, I will note that advanced meditators report an experience of something Buddhists call "The Ground Radiance" which is supposedly the most fundamental level of consciousness. I've only experienced this (maybe!) once. Such reports are always tantalizing. They're obviously a brain hack, but what's generating this perception? Is it a direct experience of whatever the "quantum observer" is? It's hard to tell. The problem is that most of the people experiencing this more than accidentally have spent tens of thousands of hours meditating. They're experts, just as quantum physicists are. The tricky part is that it's hard for someone to cross-train in both physics and meditation to the point where they could reliably experience whatever this "Ground Radiance" is and turn their experience into a testable physical hypothesis. Aside from the issue of talent, there aren't really that many hours in a life to try. But that's the "quantum woo" behind things like Capra's theories. It's not silly to look for links between anecdata and incompletely understood theories, especially when one is looking for a way to create an objective test of either or both.

    654:

    Pan narrans in fact.

    655:

    Simulating a new economic theory... been done, here in the US. Kansas. And after 5 years or so of Gov. Brownback's libertarian/"tea party" (aka neoConfederate) policies, the (Republican) legislature said "Jeezuz H., this is a disaster", and raised taxes, so they could, for example, actually pay enough to attract school teachers....

    656:

    I don't think that's completely true. I can decide that I will move my finger in three seconds, then count to three and move my finger. The decision definitely precedes the movement.

    I was thinking about this in computer terms, and I wonder whether some form of speculative execution is going on and our consciousness is the part that decides which of the speculations to instantiate.

    Consider the problem of an early hominid drinking from a watering hole. Some possible movements are "run from a predator," "drink water" and "drive another animal away from your territory."

    So the brain cues up all those actions, and your consciousness decides which one is appropriate, hopefully at the level of "trained reflex;" that is, that all the moves have been previously practiced (or at least previously envisioned) and hopefully the pre-conscious part of the brain picks the right one!

    I think this is responsible for some of the spectacular fails we see sometimes when someone has to act quickly. "It was sad, but also kind of funny. Oog smacked the crocodile in the nose, like he was trying to chase it away. Then it bit off his arm!" This is Oog choosing the wrong pre-loaded scenario, and a major failure of the human version of speculative execution.
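
    Purely for illustration, here's how crude that "pre-loaded scenario" machinery could be and still mostly work. Everything in this Python sketch (the plan names, the error rate) is invented; it's just the shape of the idea:

        import random

        # Hypothetical pre-loaded plans, prepared before reaching the watering hole.
        PLANS = {
            "crocodile": "back away slowly",
            "rival":     "shout and posture",
            "thirst":    "kneel and drink",
        }

        def fast_selector(situation, error_rate=0.1):
            """Stands in for the unconscious pattern-matcher: quick, usually right, sometimes not."""
            if random.random() < error_rate:
                return random.choice(list(PLANS))   # Oog grabs the wrong pre-loaded scenario
            return situation

        def respond(situation):
            return PLANS[fast_selector(situation)]

        print(respond("crocodile"))   # usually the right move, occasionally a spectacular fail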

    657:

    They weren't simulating an economic theory. They were using a "simulated" economic theory! (i.e. not real.)

    658:

    What I have seen seems to say that it is almost certainly some form of speculative execution, with the higher levels of the brain acting as selectors and censors. A failure in the latter leads to uncontrollably impulsive behaviours, Tourette's, etc.

    659:

    More to the point, what they did should be called a "limited production deployment" rather than a simulation. There was no way that (even if their crazed, onanistic world view were actually correct) they could have affected or measured macro-level outcomes. The Canadian UBI experiment from the 70s was much more valid in terms of measurable outcomes.

    660:

    That seems to be the point. You could have unconsciously decided to test the idea by counting down before becoming consciously aware of the already-made decision. And then when the countdown is complete the decision whether to go through with it or cancel could also be made unconsciously before becoming consciously aware of it. That's why the experiments use electrodes measuring the actual nerve impulses rather than the armchair philosopher's testimony of what they perceived the temporal order of events to be.

    661:

    The problem is that some things are clearly planned in advance with great and careful thought, including attention to facts which are frequently counter-intuitive.

    My guess would be that decision-making is unconscious unless specifically overridden - note the small number of people who actually enjoy thinking about anything complex.

    662:

    If we're talking about primitive decision making, I saw a very similar scenario described for a Lystrosaurus by a waterhole at the Permian/Triassic boundary. Notice the size of the cranium in the picture. (This from The Evolution Underground). It doesn't take a brain much larger than a peanut to come up with the choices you've described.

    To put it gently, I think that primitive hominids were at least as smart as chimpanzees, and it's worth noting that chimps actually have better memories than we do.

    663:

    "I will note that advanced meditators report an experience of something Buddhists call 'The Ground Radiance' which is supposedly the most fundamental level of consciousness." Cite? Closest I could find was Radiant Light, Clear light meditation and Ösel (yoga), which mentions "inner radiance of the ground". (Seems a bit extreme but maybe worth pursuing.)

    664:

    I hate to rain on the parade, but ... That reads just like christian mystical/meditation bullshit. Got any experimental external evidence, rather than subjective "experiences"?

    665:

    "I agree that we're seemingly stuck with a many-worlds universe of infinite local dimensions..."

    I see this more as a directed acyclic graph with unimaginably many but nonetheless a finite number of nodes. If there is a root, the root is the primordial singularity and the graph is a tree. There would be a question as to whether the universes bifurcate or rather propagate arbitrary numbers of child nodes, and also whether there can be multiple paths to an arbitrary node.
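
    Just to make the structure concrete, a minimal sketch (Python, names invented, no physics implied): nodes carry a list of parents, which covers both the tree case and the case where multiple paths lead to the same node.

        from dataclasses import dataclass, field

        @dataclass
        class Universe:
            label: str
            parents: list = field(default_factory=list)   # empty list = the root / primordial singularity

        root = Universe("primordial singularity")
        a = Universe("branch A", parents=[root])
        b = Universe("branch B", parents=[root])
        c = Universe("node reachable two ways", parents=[a, b])   # multiple paths, so a DAG rather than a tree

        def ancestors(node):
            """Collect every distinct ancestor by walking parent links back towards the root."""
            seen, stack = [], list(node.parents)
            while stack:
                n = stack.pop()
                if n not in seen:
                    seen.append(n)
                    stack.extend(n.parents)
            return seen

        print([u.label for u in ancestors(c)])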

    666:

    "Got any experimental external evidence, rather than subjective "experiences"?"

    Poor choice of words, in that all evidence based on observation, including measurement, is "subjective". Yes I'm aware there's a schoolboy version of "subjective versus objective" that strives to align with "partial versus impartial", "qualitative versus quantitative" or "anecdote versus evidence", but ultimately what we're playing with in that dichotomy is "active versus passive". The best impartial evidence is gathered actively, that is subjectively, because otherwise you passively fail to counter any inherent biases in the methodology or the situation.

    Data loggers are great because an investigator can be free of thinking about data quality during the course of the experiment, which gives back some cognitive load for ensuring the design is carried out as planned. However they do not eliminate the requirement for subjective evaluation of inherent biases during experiment design or analysis.

    As for experiment, so too for field work. Many disciplines lack quantitative methods due to the nature of their subject matter, at least at the point of information gathering*. The same principles apply there too.

    *You can certainly use quantitative methods over aggregates of qualitative information: otherwise no-one would bother with surveys.

    667:

    'First off, we do not know that any of the other "people" we observe also have such an "I", or even exist, ....'

    -- Ah, nope ....We do know that the 'other' people have an 'I' via consensus reality which is first introduced while bringing up baby, educating kids/the masses, the measurable effect of media/social media (identified alike-segment exposed to the same message reacting similarly), etc. BTW, having an 'I' is not the same as having the 'same I'. (Also, I know they exist because they've run into my car.)

    Apologies for not being clear enough. The "I" referred to is the observer inside your head that says "I am". We (I) don't know there is another such, we assume/believe that such exists inside others. It's the old "brain in bottle" issue: we accept the premise that sensory input is an expression of an exterior reality but can't actually "know" that to be true; likewise we accept the premise that other apparent entities have a similar "I" inside, but don't "know" that.

    Solipsists and psychopaths might argue that there is only one "I", and the rest are automata, but in that case I am logically entitled to restrain/eliminate them as a threat to (a) people, if I disagree with their argument, or (b) me, the only real "I", they being mere soulless automata if I agree with the arguments they emit to me. ;-)

    668:

    Unfortunately, while such a view makes for good science fiction, it's fairly dubious physics. The first problem is that it assumes that division points are discrete in time - and, while such models of time have been speculated (see chronon etc.), they aren't generally accepted. If you allow continuous split points, you get something that is similar to continuous Markov theory, which is seriously difficult to work with (i.e. beyond most pure mathematicians).

    A nastier problem is that it assumes that there are only a finite (possibly enumerable) number of outcomes from division points, which is DEFINITELY way outside current quantum mechanical models. If you allow a division into a (measurable) continuum, you get an interesting model where the state of the universe is a multivariate probability distribution. I once tried looking at a much simpler model (in a computational context) and decided it was too hard for me.

    But, if you do both? Now, there be dragons! Mathematicians like Kolmogorov MIGHT be able to work with that (except he's dead!), but it requires a genius of that level.

    Of course, assuming a set of universes discrete enough that you can get to an identified one (as needed for most science fiction) needs both the time and number of outcomes to come in well-separated 'units'. Assuming division only when an active observation is made is metaphysical woo of the sort that Greg Tingey rails against in other contexts.

    669:

    Assuming division only when an active observation is made is metaphysical woo of the sort that Greg Tingey rails against in other contexts.

    A re-statement of an old psycho/philo-sophical problem, in other words: "Does a tree falling in an "empty" forest make a sound when it falls?" YES IT DOES. Events happen without mediation & leave traces which are discovered afterwards. If it was otherwise, science & scientific discovery would be utterly futile, wouldn't they?

    R Feynman was of a similar opinion, IIRC.

    670:

    Many worlds, with infinite local dimensions... Um, right. That means that you can't predict anything, or adequately describe even, if there are an infinite number of parameters.

    On the other hand, the central description of General Semantics is the structural differential, as I mentioned in an earlier post in this thread. Only some of its "infinite" characteristics are observable... and we can work based on those.

    There's also the question of whether they're actually infinite, or just very large. I mean, if we go down to the state of all the quarks in the particles that make up X, it's amazingly large... but not, actually, infinite.

    I'd say infinite is a throw-up-your-hands cop-out.

    671:

    It only makes "sound" if there's something to hear it. Otherwise, it makes vibrations.

    672:

    I probably chose a poor example, because the issue really isn't primitive vs. sophisticated, but whether consciousness evolved out of a hypothetical facility for deciding which branch of an organic brain's capability for "speculative execution" should be followed. Eventually a less primitive hominid needed to make a decision about whether a particular piece of flint made a better knife vs. a better spearhead - and that didn't take place at a dinosaur's level of consciousness. Or fifty thousand years later, "Which programming language should I use for dealing with these network issues... C? C++? Perl?" Does something which resembles "speculative execution" play a part? Or more likely, speculative execution plus some kind of mental simulation of the consequences of each branch, (in which case "I" may have evolved out of the "locator" for the self in an organic simulation - but now I'm getting waaaaay too speculative.)

    673:

    No, infinities are not cop-outs, but they often complicate things in ways that even mathematicians don't expect, though it can often be cleaner to work with countably infinite entities than arbitrarily bounded ones. For example, vector spaces with a countably infinite basis set aren't too bad (if a little tricky), but uncountably infinite ones are definitely hairy. You've played Zork - now go and play with Zorn! See, for example:

    http://www.math.lsa.umich.edu/~kesmith/infinite.pdf

    674:

    "it's worth noting that chimps actually have better memories than we do."

    In what way "better"? It can't be in long-term retentiveness because humans routinely remember things that happened long enough ago for a chimp to be dead twice over; it can't be in detail because you can assess the detail of a human's memory to so much finer a degree than you can a chimp's. I can't readily come up with any meaning of "better" that both makes sense and is sensibly measurable.

    675:

    In computerized tests of memory (as in those computer matching games we all hate), chimps reportedly routinely do better than do the grad students showing them how to do it. According to Frans de Waal (a primatologist who worked with apes for his entire career), chimps have better memories than do humans. Humans are better at planning and problem solving. They're not more primitive, they're different.

    As for lifespans, chimps in the wild live into their late 40s, about the same as for human peasants, and the reason is probably the same: drinking dirty water and dealing with infections the hard way. The oldest known chimp was "Little Mama" at Florida's Lion Country Safari, who was in her late 70s when she died.

    The guys who study public sanitation and clean water supplies make a pretty good case for sanitation being a bigger cause of the rapid advances in human age in modern countries than medical treatment. Although nobody has mentioned it, when you look at the great apes, they tend to live decades longer in zoos than they do in the wild, and I suspect there's really very little difference in lifespan between apes and humans. The only reason we got this impression is because First World zoologists don't normally have the public health chops to notice the difference sanitation makes, and then go on to assume that apes are primitive for only living into their late forties on average (like, well, people with brown skins in places hit hard by colonialism, but we won't mention that).

    676:

    So, at playing Simon, basically?

    Interesting about lifespans; I had it in my head that captive chimps didn't usually live much beyond 30, and less in the wild. I also thought, more vaguely, that humans had an unusually long lifespan in large part because it also takes them an unusually long time and an unusual amount of effort to raise kids, so the adults need to be likely to stick around longer to do it for the reproduction to be successful, whereas chimps grow up much quicker. (I understand that elephants are kind of similar - needing to be long-lived to be around to raise their slow-developing kids, and backing that up by relatives getting involved to help.)

    677:

    assume that apes are primitive for only living into their late forties on average (like well, people with brown skins in places hit hard by colonialism, but we won't mention that).

    Colonialism only seems to be relevant in that we don't have great records of non-colonialised peoples. When I was at school we were taught that local Maori typically died in their 40s or 50s after grinding their teeth down on the sand in shellfish. But Maori weren't your "typical savages" in many ways. They have records of people living well into their 80s, but I don't have good records to hand. Google threw up "Evidence suggests that Māori life expectancy at the time of Captain James Cook's visits to New Zealand (between 1769 and 1777) was about similar to some of the most privileged 18th-century societies. Māori may have had a life expectancy at birth of about 30. After European contact, however, there was a major decline in Māori life expectancy. By 1891 the estimated life expectancy of Māori men was 25 and that of women was just 23." which does rather put it in perspective. Later Wikipedia suggests "Māori always had a high birth rate; that was neutralised by a high death rate until modern public health measures became effective in the 20th century when tuberculosis deaths and infant mortality declined sharply. Life expectancy grew from 49 years in 1926 to 60 years in 1961" which reinforces the basic public health stuff rather than insulin and heart transplants.

    Oh, I watched a fascinating "British history" take on a battle from the Maori wars, where the Maori taught the British some lessons in what would later be called trench warfare. And the utility of killing enemy officers, especially the ones who like to stand up and draw attention to themselves :) I was taught a subtly different version of the battle but the facts are compatible :)

    678:

    Just to recap, Heteromeles said (@652):

    "I agree that we're seemingly stuck with a many-worlds universe of infinite local dimensions..."

    To which I added (@664):

    "I see this more as a directed acyclic graph with unimaginably many but nonetheless a finite number of nodes."

    I suppose I wasn't clear on whether a node is a universe, a dimension, or just a parameter or event upon which the universes bifurcate - and that's because it isn't clear to me either. I'm not even clear about the need for events to be discrete (as per EC @667) because all I'm really suggesting with a graph is a way to describe relationships - at least the ones we can know or understand based on whatever rules govern the splitting.

    I don't know about General Semantics, but ontologies, which are one way to supply a semantic binding to models, can also be represented by such a graph.

    679:

    I once bought a book by de Waal based on your recommendation here, Are We Smart Enough to Know How Smart Animals Are. It's quite a good book too and relevant to this discussion.

    In the meantime we (my wife and I) are re-assessing our relationship with the (introduced, invasive) common (or Indian) myna. There are several around our place, they compete in the same niche as the (native) noisy miner and (native) magpie lark (aka peewee or peewit). All three species are surprisingly clever, but the mynas seem to be pushing the envelope. Possibly signs of a culture more developed in the area of taking advantage of human settlement.

    680:

    Just to clarify, my point is that graphs are definable ONLY for discrete sets; if the multiverse is continuous, you need a different (and much less intuitive) structure.

    681:

    "The guys who study public sanitation and clean water supplies make a pretty good case for sanitation being a bigger cause of the rapid advances in human age in modern countries than medical treatment."

    My understanding is that case has been proved beyond reasonable doubt.

    Moz is correct about colonialism; even as far as simple life expectancy goes, its effects varied immensely. I doubt there was much change in India, for example, and it increased rapidly in several parts of Africa, not least because of the suppression of tribal warfare.

    682:

    And of course infant mortality really lowers the "average life expectancy" without revealing how likely you are to live after surviving childhood.

    Is there another commonly-used measure that captures how long adults are likely to live? An age-at-death histogram would do it, but I’m wondering if some clever statistician has a measure that doesn't get skewed by infant mortality.

    683:

    Several of them, but they are all arbitrary, except for one (median age at death); in almost all circumstances, the median is the most robust measure of an average, but the usual approach is to exclude all deaths below the age of 5, 15 or whatever.
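
    To make the difference concrete, a tiny worked example with invented ages at death (not real data): mean, median, and median conditional on surviving early childhood come out quite differently when infant mortality is heavy.

        from statistics import mean, median

        # Invented ages at death for a hypothetical pre-modern population.
        ages = [0, 1, 1, 2, 4, 35, 42, 48, 55, 61, 67, 70, 74]

        print(round(mean(ages), 1))                 # dragged well down by the infant deaths
        print(median(ages))                         # more robust, but still includes them
        print(median([a for a in ages if a >= 5]))  # median age at death of those who survive to 5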

    684:

    So, 600+ posts into the discussion, and no one mentioned the hypothesis which binds consciousness, quantum physics and simulated universe in one tidy knot? I refer, of course, to this funny piece by Scott Aaronson (Shtetl optimized):

    https://www.scottaaronson.com/blog/?p=1951

    I like it a lot, and not just because it implies that this universe is not simulated, is not of many worlds, and that consciousness can't be identically copied or abstracted.

    Also, really happy to see CD show up with her usual Festival smacks into Austro-Hungarian engineers. Always a delightful puzzle. Hearts.

    685:

    Not unrelated to breaking the future is breaking the present. It seems that two-thirds of an entire species of antelope [200,000 of them] dropped dead within a few days of each other. TL;DR reason: it got too hot and humid for them, and one of their ordinary Dr. Jekyll gut bacteria morphed into a deadly Mr. Hyde.

    I wonder how many other species have this kind of 'kill switch' sleeping within them?

    686:

    Replying to a lot of people but nobody in particular.

    A good classical visualisation for the branching process in Many-Worlds is the delta of a big river, like the Danube. On the large scale, you see that the river separates into pretty well-defined branches, which in their turn separate into more rivulets, and so on, forming the familiar tree graph.

    But as you examine more closely, you see that some of the rivulets merge with each other again, so the river is only mostly branching. Also, when you try to find out precisely where the branches separate, you see that the water just gets muddier and muddier until it is actually just mud, not water, so there is no fundamental truth about where the branching happens. And even the soil that looks like it's clearly between branches is actually mostly swampland with some lakes around - there is still a lot of water there.

    If you zoom in even more the very concept of river breaks down, as what you actually have is a gargantuan amount of water molecules following insanely complicated trajectories. Even if we could somehow access all this information, we would need to drastically coarse grain it in order to get something our brains can deal with.

    Many-Worlds is similar. If you think of a single particle, it is in a superposition of uncountably infinitely many positions, analogous to the water molecules. But these individual positions are not the worlds in Many-Worlds: the crucial characteristic of the worlds is that they evolve independently of each other, and the individual positions interfere with each other all the time. To get to the level of worlds, we need to zoom out and consider bunches of positions such that the positions within a bunch interfere with each other, but do not interfere with positions from another bunch. These bunches are the analogues of the branches of a river.

    As in the case of the river, this bunching out is not perfect: there are positions in the swampland between branches, and there is always a little bit of interference between different bunches.

    Now about the world-splitting process: it happens when some of the positions interact with another particle, in such a way that they become entangled and trigger a chain reaction of interaction and entanglement that amplifies the quantum fluctuation to the macroscopic level. This process is known as decoherence, and happens all the time, several times per second. We don't care about the overwhelming majority of such events: they are about stuff like a grain of dust in a spatial superposition getting decohered by its interaction with the air molecules in the room.

    About the crucial issue of how many worlds are created in such a splitting, I'm afraid I don't really know. On one hand, the number is not really well-defined, for the same reason the number of branches in a river delta isn't, but on the other hand we can in practice count really well the number of branches in a river delta, with only a few corner cases being unsatisfactory. So I guess one could define a respectable way of counting worlds. I expect the number in any case to be finite, but without any fundamental upper bound, as in the example of the omnidirectional antenna I gave above.

    687:

    You've just reminded me of a very important paper by the late B Mandelbrot: "How Long Is the Coast of Britain?", where he first publicly introduced the idea of "fractality", to coin a word, & the "impossibility" of making truly accurate measurements, even though, at a large scale, the picture is classical & smooth.
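
    The one-line version of that result, as usually stated (standard form, not quoted from the paper): the measured length depends on the ruler you use,

        L(\varepsilon) \approx F\,\varepsilon^{1-D}

    where ε is the ruler length, F a constant, and D > 1 the (fractal) dimension of the coastline, so the measured length keeps growing as the ruler shrinks and there is no single "true" answer.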

    688:

    From that talk/paper: what exactly does a computational process have to do to qualify as "conscious"? Yeah, that could be the really important question, couldn't it? I note that he appears to agree with Everett & I Banks about the definition of "Mind" - interesting.

    689:

    The comments on that article are also worth reading. Scott Aaronson attracts heavyweights... one would almost say, it attracts... Boltzmann-like brains :)

    690:

    Thanks for that link. Rambling, a bit, but fun. (Can't say I agree though for parts of it.) The talk slides he links (here) look like a serious rabbit hole. Skimmed a few; I enjoyed Charles Bennett's decent attempts at amusing artwork. (e.g. slides 6,8,17. I am shallow.) (The videos appear to have vanished?)

    691:

    "Got any experimental external evidence, rather than subjective 'experiences'?" Well, sort of. (1) There have been EEG studies of meditative states since there were EEGs, and there is a serious community that uses feedback from such devices in training. Plus some consumer devices. (I toyed for a bit with a 1-electrode (well, 2) toy device. It was sufficient to do a little biofeedback training.) (2) There are a pile of fMRI studies. Search Google Scholar with keywords fmri meditation. Many of them (as usual with fMRI studies) are probably of dubious merit, but they do exist. Here's a random recent one, widely cited: The neuroscience of mindfulness meditation (8 March 2015, researchgate link for a Nature Reviews Neuroscience paper.)

    692:

    Going back to the thread root article, here is a link on how exactly life looks when capitalism meets Big Brother in modern China:

    https://www.buzzfeed.com/meghara/the-police-state-of-the-future-is-already-here?utm_term=.hsA1g2pVd#.abb5q2dJb

    There is a similar WSJ article but I don't want to spam with links too much.

    "James Leibold, an associate professor at La Trobe University in Australia who is conducting research on security contractors in Xinjiang, said the broader security industry, including both physical policing and surveillance, is now the biggest employer of people in the region."

    693:

    Very worrying, in more ways than one. The sheer stupidity of the Chinese & other governments is astounding. One: the repression is so ubiquitous that it invites/provokes resistance, where none existed before ... And this is an old lesson that seems to have been forgotten through arrogance & stupidity. Two: the surveillance system is already the largest employer - just like the Stasi, in other words, or the USSR ... which means that, sooner or later, it will collapse under its own weight, bringing, if not everything, a large part of the rest of the system down with it. Again, forgotten through arrogance & stupidity.

    Why do people never learn?

    [ This is the mirror-image of "Marxist" revolutionaries & unbelievably stupid so-called idealists like Momentum, who claim that "Our revolution will be different!" - oh yeah. ]

    694:

    Actually, I mentioned this idea en passant in 463. The problem with Scott Aaronson's conjecture is that it is explicitly wishful thinking. He proposes it just because it allows one to evade the disturbing philosophical questions, not because he has any reason to believe it is true from physics or biology.

    695:

    Greg, the Chinese state is not communist (or Marxist) except in name, and the whole article was about surveillance under capitalism - private companies serving the state interests, for profit. I see no similarities with USSR or Stasi, and I lived on the other side of the Iron Curtain. I see no imminent collapse, just more of the capitalist panopticon dystopia.

    696:

    Greg, the Chinese state is not communist (or Marxist) except in its name, and the whole article was about surveillance under capitalism - private companies serving the state interests, for profit. I see no similarities with USSR or Stasi, and I lived on the other side of the Iron Curtain. I see no imminent collapse, just more of the capitalist panopticon dystopia.

    697:

    Scott's conjecture is explicitly stated as an idea with no concrete evidence. It is elegant and neatly tidies up a lot of stuff, from encrypted lookup tables to Boltzmann brains to quantum torture, so just by its elegance, it is worth mentioning and discussing. I believe it is adding more to the discussion than the preceding 600+ posts. Your comments on 463... I am sorry, but I don't see you expressing Scott's ideas of what makes a computing process conscious. The only thing I see there is agreeing that minds are probably not a quantum phenomenon. All that being said, I do appreciate reading your responses :)

    698:

    Did I use the word "communist" in relation to the current Chinese guvmint? I did not. OTOH the similarity to the Stasi in spending vast amounts of time & money covering & "suppressing" a revolt that may not have been there before, but now is, is incredibly stupid, for reasons given. You can't keep the lid on for ever, as Metternich found out. And, it's incredibly wasteful & resource ( time ) hungry.

    699:

    Well since you do appreciate my responses I'll write you one ;p

    Scott's idea, in a nutshell, is that what makes human brains conscious is that they have unknown quantum states in them, and the cognition process amplifies their quantum uncertainty to the macroscopic level, allowing us to display unpredictable behaviour. It is crucial to his idea that the quantum states are actually unknown, so that our unpredictability is not merely probabilistic (i.e. if the quantum states of my brain were known I could perfectly predict probabilities for the behaviours), but displays what is known as Knightian uncertainty, where not even the probabilities of the events are known.

    (So this falls under the category of "the precise quantum state of the brain is essential for your identity and cognition")

    That the quantum states in our brain are unknown, and can't be known, is uncontroversial. What I find objectionable is the assumption that our cognition process uses their quantum uncertainty in a way more interesting than a classical computer uses a random number generator. There is no reason to think this should be the case, and there is not even a conjectured protocol or algorithm that would be more powerful with Knightian uncertainty than with mere probabilities.

    700:

    Penrosian delusions must be catching :-)

    With regard to your last clause, actually, there is. I can write an algorithm to check if an arbitrary program will terminate, which will fail for an unknown proportion of programs. I believe that there is no way to calculate that proportion, either! But, just add the word 'useful' before 'conjectured' and that's the way that I would vote, too :-)
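
    A toy version of such a checker, just to show the shape of it (Python; the "programs" are modelled as generators that yield once per step, which is obviously a stand-in, not a real analyser): it is sound when it answers "halts" and simply gives up otherwise, and nothing tells you what fraction of inputs land in the "unknown" bucket.

        def halts_within(program, arg, budget=10_000):
            """Best-effort termination check: 'halts' is trustworthy, 'unknown' means we gave up."""
            steps = program(arg)
            for _ in range(budget):
                try:
                    next(steps)
                except StopIteration:
                    return "halts"
            return "unknown"

        def countdown(n):          # terminates
            while n > 0:
                n -= 1
                yield

        def loop_forever(n):       # does not terminate
            while True:
                yield

        print(halts_within(countdown, 5))      # halts
        print(halts_within(loop_forever, 5))   # unknown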

    701:

    I like the analogy. In particular, it has the properties that you can't necessarily decide if one stream is a tributary of another, move away and get back exactly where you were before, and so on.

    702:

    I wouldn't be surprised if there were quantum effects in neurons.

    However...

    What I'm thinking of is something more analogous to the known superposition effects in mitochondria and plastids. In electron transport chains (in mitochondria and chloroplasts), the electrons apparently use superposition to "find" the most efficient of several routes through a chain of organic molecules. Additionally, there are quantum superposition effects in the way the antenna complexes around the photosystem(s) in chloroplasts capture photons and direct them to the chlorophyll.

    I wouldn't be surprised if there was some sort of electron superposition in the way that neurons process signals. It would be extremely cool, but it wouldn't involve qubits.

    And if (horror of horrors) it turns out the Copenhagen Interpretation is physically correct and many worlds is not, I also wouldn't be surprised if neurons use interactions with whatever "observer" is in processing data, either.

    The reason I make these surmises is evolution, which in many ways is simply entropy favoring more efficient dissipative structures through a random walk over billions of years. If observer effects make for more efficient entropy maximization, they'll get caught up in the mix. Ditto all other quantum-scale effects that are possible in an aqueous environment at STP. The stuff with electron transport chains and photosynthesis is billions of years old, after all.

    If it turns out that neurons depend on "observer" (or observation effects) to process data, that would mean that consciousness is boringly ubiquitous. I'm not sure what that would do to the beliefs of people who prize consciousness above all else, but it would put an interesting spin on mysticism. "Observer" would resemble an omniscient deity, but not in a way that most people would necessarily want to relate to "it."

    703:

    To be fair, Aaronson's proposal works with regular physics, whereas Penrose's requires science fiction that would make even Charlie blush.

    I don't really understand your second paragraph, though. How could Knightian uncertainty help with the halting problem? But your larger point is correct, anyway, as a source with Knightian uncertainty is better at being less predictable than a source with mere probabilities. The key point is usefulness.

    704:

    Given that Geckoes climb completely smooth surfaces & can walk across dry ceilings ... because their feet use the weak atomic/van der Waals (?) force to do so. ( See HERE ) Then maybe, just maybe your suggestion(s) have merit. However ... "Copenhagen Interpretation" - NOOOO - please say it ain't so!

    705:

    In a trivial sense, every effect is a quantum effect, because everything is made of atoms that obey the Schrödinger equation. Hence the question is not really whether the effect is "truly" quantum, but whether a classical model of it is good enough.

    In your example of electrons using quantum effects to find the most efficient transport route, we could simply cheat by calculating the most efficient way by brute force and telling the simulated electrons to follow this way.

    Even the mere stability of matter is a purely quantum effect; remember that this question is what motivated the quantum model of the atom in the first place. We get around it in classical models by simply assuming that there is matter and that it is stable, so it is easy to get around. Superpositions are also easy to deal with, if there is no interference, as in this case they reduce to mere randomness (that is, if some particle is in a superposition of being here and there, this is equivalent to say that it is either here or there with some probability).

    Now interference, this is the tough nut to crack. There are lots of tricks to deal with it, but in general we just have to use a quantum model, with all the difficulty that this entails.

    So the question is not really if there are quantum effects in neurons, but whether we need to simulate large-scale interference effects to simulate the behaviour of neurons.

    706:

    Since I'm trying to keep these to short blog entries, why don't you do a bit of googling on electron transport chain and quantum effects and photosynthesis and quantum effects, and get back to me? You might be surprised and actually learn something useful.

    707:

    Well, yes; neurons work by chemistry, chemistry works by quantum effects... But then, transistors also work by quantum effects, yet you don't get quantum computation just "dropping out of it".

    Similarly, I don't understand what you're saying is "special" about "the electrons apparently use superposition to "find" the most efficient of several routes through a chain of organic molecules." I mean, yes, they do, but that's how things like that do work when you look at them at that level. The same goes for the photon steering in chloroplasts: the standard "Young's fringes" physics experiment demonstrates superposition effects routing photons.

    It's easy to forget that although quantum effects may seem weird and strange when you consider them in isolation, they don't correspond to things behaving in a magical fashion; they are instead a very low-level description of things behaving in a normal and everyday fashion. Getting quantum effects to not result in normal and everyday consequences requires some very careful and complex experimental design, and the setups are very sensitive to external disturbances so you need highly controlled conditions to get a result.

    The idea of a quantum basis for consciousness seems superficially attractive for a basically mystical interpretation, and is "useful" for things like providing a quasi-scientific rationale for religious concepts of souls or having fun thoughts about when you're stoned. But I don't see any use in taking it seriously, and AFAIK there is not a scrap of actual evidence supporting it. The justification seems to boil down to "consciousness is a weird thing which is hard to understand; quantum physics is a weird thing which is hard to understand; therefore they must be somehow logically connected in a useful way". I prefer to consider consciousness as an emergent property of a complex collection of classical processing nodes; this is somewhat subject to the same criticism, but appears to be closer to reality, and it has the advantages both of parsimony and of some (admittedly vague and distant) support in the experimental demonstrability of complex ordered emergent effects arising from selection on random processes.

    There was an experiment I read about some time in the 90s (which I think has been mentioned on here before) where a "genetic algorithm" was used to find a configuration for an FPGA that would produce regular pulses at a particular frequency. The circuit diagram of the configuration it came up with was nothing like what an engineer would produce; it looked like a random mess that you wouldn't expect to do anything useful. It depended on poorly-characterised or uncharacterised properties like gate propagation delays, and what happens when two conflicting logic levels fight each other, and I think even side-channel interactions with supposedly inactive parts of the chip. But it did produce regular pulses at the desired frequency.

    It wasn't beyond the bounds of practicality to actually work out how it did this; the circuit diagram fitted legibly into half a page of a magazine, and it would have been possible, by tedious analysis plus measurement of the various undocumented characteristics of the chip that it made use of, to explain its behaviour. But it would have been bloody difficult and long-winded. I can't remember if the experimenters went so far as to do it; there wouldn't really have been much reason to.
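
    For anyone who hasn't met the technique, the search loop itself is almost embarrassingly simple. Here's a bare-bones sketch, with a made-up in-software fitness function standing in for the real experiment's measurement of the physical chip (and that substitution is, of course, exactly why the evolved circuit came out so strange):

        import random

        GENOME_BITS = 64      # stand-in for an FPGA configuration bitstream
        TARGET = 0.5          # pretend "desired output frequency", arbitrary units

        def fitness(genome):
            """Toy surrogate: pretend the fraction of set bits is the circuit's output frequency."""
            return -abs(sum(genome) / len(genome) - TARGET)   # closer to the target = fitter

        def mutate(genome, rate=0.02):
            return [bit ^ (random.random() < rate) for bit in genome]

        def evolve(pop_size=50, generations=200):
            population = [[random.randint(0, 1) for _ in range(GENOME_BITS)] for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=fitness, reverse=True)
                parents = population[: pop_size // 2]                         # keep the fitter half
                children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
                population = parents + children
            return max(population, key=fitness)

        print(fitness(evolve()))   # should end up close to 0, i.e. close to the target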

    There is a much simpler analogue circuit, using only one or two active devices - they might even have been valves in the original version - which simulates the behaviour of a primitive animal such as a simple insect. It assumes the existence of the "mechanical" level of I/O processing - "I smell food" is something like a contact, rather than the preprocessed output of an array of chemoreceptors, etc - but given this simplification, it is able to mimic both the basic behaviour and the style of movement of something like a fly using what looks like a ridiculously trivial processor.
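
    And the flavour of that "ridiculously trivial processor", in the spirit of a Braitenberg vehicle rather than the original valve circuit (my own toy sketch, not a description of it): two sensors cross-coupled to two motors already gives you food-seeking.

        def steer(smell_left, smell_right):
            """Cross-coupled wiring: the stronger smell drives the opposite motor harder,
            turning the 'fly' towards the food. Inputs are arbitrary 0..1 intensities."""
            motor_left, motor_right = smell_right, smell_left
            return motor_right - motor_left    # positive = turn left, towards the stronger left-hand smell

        print(steer(0.8, 0.2))   # 0.6: turns towards the smell on the left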

    Both of these devices, being electronic circuits, operate by dependence on quantum effects, but neither of them do quantum magic; they obey the same, classical, rules of behaviour as any other circuit, but just apply them in an unusual (to an electronics engineer) way.

    It seems to me far more likely that the mental processes of creatures more complex than a fly are simply more elaborate instances of the same kind of phenomenon than something operating on fundamentally different principles.

    708:

    Ha! Seems I took long enough to write my post for you to say much the same thing rather more promptly :)

    709:

    @Araujo: It seems that we have very different ideas of Scott's idea in a nut..shell (pun intended). He explicitly states that some sort of Knightian uncertainty might be needed for "free will" and really doesn't claim at all that it is needed for conscious computation. He explicitly rejects Penrosian quantum effects on the brain. The key difference in what makes something conscious is "digital abstraction layer", a shorthand for complete reversibility of the mind state. His idea allows conscious AI in such a way that you can't really do "reverse its mind state to before we torture-query it" which makes all sorts of AI ethics much easier.

    @elderly cynic - really, just go and read the link. You can also have loads of fun reading this related (but not exactly the same subject) paper: https://www.scottaaronson.com/papers/giqtm3.pdf

    710:

    "complete reversibility of the mind state."

    As I understand the mechanisms of neurons there is not the slightest chance of this being possible. Some aspects of DNA processing are reversible, but synapses are not.

    711:

    ..which is more or less Scott's point, though he doesn't talk about neurons - any sufficiently complex computational process that is not completely reversible would be OK (so, you can have AI in classical computers, you just can't have completely transparent and conscious AI in classical computers). AFAIK, of course.

    712:

    I think you misinterpreted what I wrote.

    Mateus is more correct. What I'm pointing to here is the problem of how a blob of cells running on around 100 watts can process data in ways that are imperfectly mimicked by way of large data centers running on the electricity needed to power a small town. I'm not invoking quantum woo because it's unknown, I'm looking at other subcellular instances where nature has made some really, really efficient processes using explicitly quantum effects in systems that were originally assumed to run purely on classical chemical interactions (this is the part that Mateus got wrong, I think out of ignorance). Then I'm wondering whether similar things happen inside neurons.

    If there's quantum behavior in neurons, I sincerely doubt it's in turning any parts of neurons into quantum nanocomputers running on atomic qubits, although that would be cool (A self-assembling quantum computer in an electrolyte solution at STP?). Rather, I'd guess that there's something non-classical involved in the way electrons move within neurons that makes neurons unexpectedly efficient at processing data, just as a mitochondrion is unexpectedly efficient at moving electrons as part of respiration or a photosystem antenna within a chloroplast is unexpectedly good at directing photons into places where they can power chemical reactions.

    The "woo" part is that if the Copenhagen interpretation is correct and many worlds is not (and I have no idea how to test this), you've got observer effects causing waveform collapse inside each neuron as a fundamental part of how they process data. If that's what turns out to be the root of consciousness, then consciousness is some sort of universal, low-level phenomenon, while your mental construct of your "self" as a "conscious" "being"* is an emergent, high-order phenomenon that only exists because it helps you make more babies than you would if you spent your life dwelling on the quantum neuron interaction level that advanced meditators reach.** In this highly hypothetical scenario, coupling the these two very different phenomena is a mistake, and talking about uploading your consciousness to a computer becomes nonsensical.

    *If you believe the Buddhist mind model, self is an emergent illusion, as is being and arguably consciousness in the "I think, therefore I am" sense. Hence the quotes.

    **Note that most advanced meditators are childless. There's a whole other ball of worms about how fitness in evolutionary terms relates to intelligence and learning, but they are not highly correlated, to say the least.

    713:

    “And if (horror of horrors) it turns out the Copenhagen Interpretation is physically correct and many worlds is not.” Is there any conceivable experiment that could prove such a result? It doesn’t have to be a benchtop experiment - I’ll let you have the resources of a Kardashev Type III civilization and a million years to collect results.

    714:

    I disagree Greg.

    Here are the strengths of the Chinese system

  • It functions similarly to the military-industrial complex in the US. That has lasted close to 70 years so far. There are regions in the US (and probably states) where the MIC is the largest employer.

  • If Xinjiang were a separate country financing this on its own, you'd be right. Even if we assume that this panopticon is spread out to other Autonomous Regions like Tibet and Inner Mongolia, it still covers less than 10 percent of China's population.

  • Xinjiang has a large settler population. The last census to measure ethnic groups was taken in 2000. At that time, 43.6 percent of the territory was Uighur compared to 40 percent Han Chinese. Most of China's urbanization happened after 2000. Unlike the Stasi example, the settler population have a huge stake in making sure it's successful.

  • https://en.wikipedia.org/wiki/Xinjiang#Demographics

  • Xinjiang functions as the place where prototypes of surveillance technologies are tested and fleshed out. It is my understanding that they're not deployed to the rest of the country until they're more mature, and thus cheaper.

  • Just like the military and NASA in the US, the surveillance industry creates its own spinoff technologies which help the economy. Even if they're not taxed (which I doubt), they do contribute to economic growth which is in turn used to pay for the system.

  • I wouldn't be surprised if private companies like Alibaba and Baidu aren't using the data collected by the government to train their own systems.

  • Let's not ignore the massive investment the region is experiencing due to the "One-Belt-One-Road initiative"

  • https://www.cnbc.com/2017/11/23/xian-china-is-a-growing-hub-for-grain-trade-with-kazakhstan.html

    https://www.nytimes.com/2018/01/08/world/asia/kazakhstan-china-border.html

    https://www.nytimes.com/2018/01/01/world/asia/china-kazakhstan-silk-road.html

  • I wouldn't be surprised if Kazakhstan isn't quietly cooperating here to prop up their own dictatorship.
    715:

    Heteromeles: The fact that analogue computers can outperform digital ones dramatically is very well understood in CS and a surprise to no one.

    716:

    "the problem of how a blob of cells running on around 100 watts can process data in ways that are imperfectly mimicked by way of large data centers running on the electricity needed to power a small town."

    I think the important word there is "imperfectly". The mechanics of computation in the two different types of system are so different that comparison of energy consumption really doesn't mean a lot. (And it comes out the same the other way round (ie. cells mimicking silicon), for the same reason - for instance my PC can do thousands of FFTs of double-precision complex data using less energy than it takes to power me long enough to hit the "return" key and start it off, never mind long enough for me to do the calculations by hand.)

    If my estimates are anywhere near the mark, a bit flip in a DRAM cell uses something over 200keV. For comparison, the bond energy of carbon monoxide (the most energetic bond known) is about 10eV. Biochemical reaction bond energies are considerably less than this. It is therefore obvious that chemical computation on the molecular scale can't possibly be anything like as extravagant with energy as digital computers are.
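
    A quick sanity check on that figure, with assumed round numbers for the cell (neither value is measured, they're just typical-order guesses):

        # Rough switching energy of one DRAM cell, E = 1/2 * C * V^2, expressed in electronvolts.
        C = 30e-15           # assumed cell capacitance, ~30 femtofarads
        V = 1.5              # assumed voltage swing, volts
        EV = 1.602e-19       # joules per electronvolt

        energy_joules = 0.5 * C * V ** 2
        print(energy_joules / EV)   # on the order of 10^5 eV, i.e. the ballpark quoted above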

    The other side of this is that molecular computation is enormously more liable to bit errors (in the informational sense, rather than necessarily actual identifiable bits) caused by thermal noise. To flip a bit in DRAM requires something like an alpha decay event to deposit enough energy in a small enough volume, but "bit errors" in biological systems are as much of a "normal thing" as Brownian motion.

    Digital computers depend utterly on the negligibly small bit error rates which are made possible by their high energies of state transition. Molecular computation on the other hand must necessarily use a very different computational model because it is imperative that it simply mustn't give a shit about huge bit error rates.

    There are at least two architectures used to achieve this. The computational operation of DNA transcription is one, which is very different again and uses reversibility to achieve resilience. Neuronal computation, which is the architecture we're talking about, uses such things as averaging over time and over a huge (essentially continuous) set of states, in computational units which are capable of much more than simple two-input Boolean operations, and (in more complex organisms) massive parallelism. By such means it can produce results which, to people historically used to understanding things by basically Boolean principles, and in these times also very accustomed to seeing computation performed artificially by stacks of high-speed Boolean units, look to be of a complexity totally out of proportion to the complexity of the processing.

    But this apparent disproportionality is more of a cultural bias than a real thing. Artificial computational devices can be made to produce similarly complex results from apparently ridiculously inadequate processing power - one example being the experiment I mentioned of the one-valve fly simulator. Then there are digital simulations of neurons, analogue electronic neuron devices, and suchlike partial reimplementations, which show that the processes of the neuronal systems at least of creatures which have only a handful of neurons and so are amenable to detailed analysis can be reproduced on arbitrary computational platforms (generally grossly inefficiently, but that doesn't matter). These platforms do not exhibit any peculiar quantum behaviour - though they may well be examples of the apparent paradox of ordered output from a chaotic system. (This suggests to me that if one is developing a philosophy of consciousness which requires a source of utter unpredictability, chaotic behaviour may well be the place to look for it.)

    (Another instance of a comparable cultural-bias-type effect is the oft-quoted fallacy that the brain "solves differential equations" at lightning speed to catch a ball, etc. Does it buggery, any more than an analogue PID controller made of a couple of op-amps does. "Solving differential equations" is just a method for understanding on an intellectual level the task the brain or the controller is trying to do, not a description of what they're actually doing.)

    So as far as I can make out, the low power consumption is not something which has been arrived at by evolution in the course of making neuronal computation possible, but rather a very fundamental consequence of chemical systems which use tractably low-energy reactions evolving a distinct and explicit computational function from their inherent computational nature. (Hello again software vs. hardware as a conceptual abstraction that does not have to map to separately identifiable physical entities to be useful.) Thermodynamic constraints mean that neither electronic-style high-energy state transitions nor computational methods that depend on the stability of such states can begin to evolve in the first place. What does evolve is a low-energy architecture which is incredibly difficult to understand, but can nevertheless be simulated (in a very rudimentary form due to practical considerations) by arbitrary purely classical methods.
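
    To put a toy number on the "averaging over many noisy, low-energy units" point above (all figures invented): components that are individually wrong nearly a third of the time still give an essentially error-free answer once you pool enough of them.

        import random

        def noisy_unit(signal, error_rate=0.3):
            """One unreliable 'unit': reports the signal, but gets it wrong 30% of the time."""
            return signal if random.random() > error_rate else 1 - signal

        def population_vote(signal, n_units=500):
            """Pool many unreliable reports; the majority answer is almost always correct."""
            votes = sum(noisy_unit(signal) for _ in range(n_units))
            return 1 if votes > n_units / 2 else 0

        print(population_vote(1))   # reliably 1, despite the dreadful per-unit error rate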

    717:

    Is it even a meaningful question? They're interpretations of the theory, not the theory itself; they're intellectual tools intended to make this weird thing easier to understand. They all represent the theory validly; there's no "correct" interpretation, there's just the interpretation that best makes a given situation understandable for a given person.

    718:

    I just read this comment. It wouldn't work as well as you think.

    Right now, the CEO gets more compensation than the board members. However, check out a map of the boards of major companies. You'll notice that several people sit on multiple boards. Let's take Amazon as an example. If Amazon is split into 50 entities, each entity would pay their CEOs a fraction of what they pay Bezos now. However, the pay of board member Bezos would increase. I could easily see Bezos hiring "representatives" to coordinate the entities.

    Plus, you're trying to fix a problem that's been fixing itself.

    http://www.bbc.com/news/business-16611040 https://priceonomics.com/why-are-so-many-of-the-worlds-oldest-businesses-in/ https://medium.com/tedx-experience/welcome-to-the-prune-club-learning-from-billion-dollar-innovation-laggards-1281e826e501

    I'm not going to search for the story now, but I remember reading that corporations have much longer lifespans in Europe than in the US. In other words, the problem is Europe, not the US. Perhaps we should look what shelters European companies from competition?

    719:

    If we're going to get into the "woo" I'll speculate a little more about my own ideas. There's the possibility that the human brain can engage in "speculative execution" just like a computer does, but in a more sophisticated way; that is, that in visiting the watering hole, we have several possible responses laid in and ready to operate... for dealing with a crocodile, dealing with another human violating our territory, etc.

    Go a step further. We also know that humans craft their "speculative executions" by thinking about the future and envisioning what might happen. In order to do this successfully, we must be able to know, in our simulation, which of the variables - crocodile, other human, etc., represents ourselves. (A hominid who can't tell the difference between crocodile and him/herself when simulating what might happen at the watering hole isn't going to live long, right?)

    So there's a lot of evolutionary pressure to evolve a sense of self. But whether that sense of self really exists (Your Immortal Soul) or is simply a variable in the simulation that carries a lot of value (or contains a lot of important code) is difficult to parse. (I choose to believe I have a soul, but can't prove it.)

    However, if you were to build a robot which could simulate its own future, and you built the necessary code to identify self vs. other, then you'd eventually evolve a robot which could use the word "I" just as we do!

    720:

    I think you missed the point. I'm not disagreeing, incidentally, with any of the points you have raised. But, for surveillance to work, you have to pay people to watch the collected data - & that's the bit that falls over under its own overload, at some point or another.

    There are signs, already, that the military/prison/industrial complex in the USA is starting to eat itself, in terms of resources ... but it will still take quite a few years before it falls down of its own accord. The diversion of human-power resources into surveillance removes the same man/woman-power from real development & industrial & social advances.

    721:

    Chimpanzees in captivity were (and probably still are) commonly euthanised when they got to be older adults as they are dangerous and difficult to handle fully-grown as well as being rather self-willed and subject to rages. The cute periods of childhood and adolescence are over and the attraction for zoo visitors of seeing curmudgeonly violent pseudo-humans screaming and throwing poo has decreased. The same problems apply to chimps as lab animals, the older fully-grown chimps are not good test subjects in the main.

    They reproduce without problems in captivity so there's no shortage of new cute babies and tractable youngsters to feed the pipelines.

    722:

    He focusses on the claim that consciousness only exists when the thinking is an irreversible process, but this is only possible if the thinking is powered by quantum states and there is a massive amount of decoherence going on with the information processing. There is no other way to get irreversibility.

    723:

    To people wondering about how to test the Copenhagen versus Many-Worlds interpretation: there is the famous Wigner's friend gedankenexperiment.

    The setup is that you have the friend inside an isolated laboratory, and Wigner outside. The friend then measures some particle in a superposition. According to the Many-Worlds interpretation this creates a world-splitting, whereas according to the Copenhagen interpretation the friend must have collapsed the state.

    But now, since the laboratory is perfectly isolated, this world-splitting does not propagate outside, and Wigner can interfere both worlds. According to the Many-Worlds interpretation he will see interference, and according to the Copenhagen one he won't.

    Unfortunately I couldn't find a good reference about it, so I'll just suck my own dick and link to my blog post about it. It gets a bit technical.

    724:

    There is no other way to get irreversibility.

    Rubbish. Second Law of Thermodynamics. Once it's done, you can't go back .....

    725:

    “Is it even a meaningful question?” That’s what I’m trying to figure out.

    If QM “interpretations” are unfalsifiable by any conceivable experiment then it’s a category error to refer to them as being “right” or “wrong” and there’s no point debating them. Their purpose is to make people feel better by making QM feel intuitive, and preferring one over another is purely a matter of taste.

    If that's the case then my preferred interpretation of QM is that our monkey brains are designed to deal with macroscopic problems like getting a log raft through breakers [1] and throwing a rock at an antelope's head, to which the "waves" and "particles" of QM are misleadingly familiar-feeling analogies at best. QM is never going to feel right, so just shut up and calculate. If thinking in terms of Copenhagen or Many Worlds helps you avoid mixing up or dropping terms in your calculations then they're helpful mnemonics, but they have no deeper meaning.

    [1] The history of hominin dispersion along coastlines and to islands like Flores makes me suspect that messing around with boats goes way back in our ancestry.

    726:

    If I read that blog post correctly you're saying it would be possible for someone with sufficient resources to create an experiment that would put an (ever-improving with multiple replications) p-value on all single-world interpretations. It's just going to be a real long time before anyone has the technical capability to create an isolated lab capable of keeping Wigner's friend from either decohering or asphyxiating. Do I understand you correctly?

    727:

    Going back to the discussion of social media and addiction, here's a little comic strip. Click on the image.
    In case it's not visible above, THIS drawing might indicate which particular social media I was referring to.

    728:

    Thank you. I think you missed my points though.

  • If the US could divert that amount of manpower for 70 years, what makes you think China can't?

  • Thanks to automation in the surveillance industry, China can do more surveillance for the same resources. China isn't doing this because they've become more paranoid; they're doing this because it's now become more affordable.

  • The settler population probably does a lot of the work "off the books/for free" for their own ideological reasons. That keeps costs down.

  • 4-6. If the surveillance program pays for itself, it's an investment, not a cost.

  • The surveillance program can continue to grow if it remains a constant percent of an expanding GDP.

  • Just like you can't take the US military-industrial complex in isolation without considering the European and Australian contributions, you have to take into account the Kazakh contributions.

  • Fundamentally, your point is that the surveillance state is consuming a greater portion of the PRC's resources. My point is that, while that may be partially true, most of the growth in the surveillance state is due to an increase in automation and efficiency. That's the reason I said that Xinjiang functions as a laboratory. Only the mature technologies are introduced into the rest of China.

    729:

    Thanks for the reference. I'm trying to figure out how you have isolated, potentially superpositioned, labs interfering with each other, but I suppose that could be virtualized in a more meaningful way (as in an isolated, virtual lab observing a similarly isolated superpositioned particle, and the virtual lab state has some way of being observed).

    I suspect also that if we get something resembling a workable theory of everything with quantum gravity, it may well favor one interpretation over another. For example, it might help determine whether "infinite" (is this even a useful term?) local dimensions in probability space (per many worlds) is more or less useful for quantizing gravitational interactions over interstellar distances than some other interpretation.

    730:

    .... of hominin dispersion along coastlines and to islands like Flores makes me suspect that messing around with boats goes way back in our ancestry.

    Aquatic Ape Theory, anybody? cough

    731:

    NICE - but I'm afraid you will have to explain further - I couldn't read the vital black-background thought bubble

    732:

    No/Yes. You are (partially) correct - BUT ... sooner or later, it will/must fall over under its own weight. Simply automating/computing (etc.) the operations "merely" pushes the collapse-point down the road, but it's still there, & consequentially, when ( NOT "if" ) the collapse happens, it will be that much more painful/catastrophic/joyful/liberating.

    "Here lies a fallen god, his fall was not a small one, we did but help build his pedestal, a narrow & a tall one"

    733:

    It's the Pope saying "Happy Easter, ladies and gents of Rome and Italy" in Italian.

    734:

    Puke. That's an excellent example of an unfalsifiable theory, sadly.

    In any case, there was a hand axe-like tool found on Crete (https://news.nationalgeographic.com/news/2010/02/100217-crete-primitive-humans-mariners-seafarers-mediterranean-sea/; see also https://www.bu.edu/archaeology/research-fieldwork/research-centers-labs/plakias/). I don't know whether this tool demonstrates that Homo erectus made it to Crete as well as Flores, or whether it's a teshoa (https://twipa.blogspot.com/2014/08/a-knife-by-any-other-name-is-knife.html) - a crudely flaked cobble made for a task and then discarded; these were made up into the 20th century, especially by women. Still, there is that possibility that Homo erectus really was good at overwater dispersal. Unlike, say, Neanderthals...

    735:

    Yes, it will collapse eventually. Eventually technology change will render the paradigm under which it's built unsustainable. But if it takes a century to collapse, then it's outside the parameters of this discussion.

    Note that I'm not convinced the US's MIC is on the verge of collapse either. It too is pivoting away from manufacturing towards surveillance as factories become more automated and the expectations of their target labor force change. People are not going to vote for a senator who brings home the pork if they believe that the pork is beneath them. Likewise, I expect that the MIC will change its internal racial hierarchy to survive, but that's more like the Hapsburgs, the Bushes, and the British Royal Family expanding their "acceptable" marriage pool to survive as an entity.

    736:

    What are you meaning by "irreversibility", then? Irreversibility in computation is a thermodynamic concept. If you calculate a+b by "conventional" methods of arithmetic (as used by ordinary digital computers, or humans with pencil and paper), which discard the a-b aspect of the result, you cannot then recover the values of a or b. This loss of information corresponds to an increase in entropy. The mechanism of operation of neurons (AFAICT) also discards information in an equivalent way and is similarly irreversible.
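
    A minimal sketch of that information-loss point (purely illustrative code, not anything from the comment itself): keeping only a+b is a many-to-one map, while keeping both a+b and a-b is invertible, so nothing is discarded.

        # Toy illustration: a lossy (irreversible) sum versus a reversible encoding.
        def lossy_add(a, b):
            return a + b                 # many (a, b) pairs give the same sum

        def reversible_add(a, b):
            return a + b, a - b          # bijective on the integers: no information lost

        def invert(s, d):
            # recover the original operands from (a+b, a-b)
            return (s + d) // 2, (s - d) // 2

        assert lossy_add(2, 5) == lossy_add(3, 4)          # 7 == 7: inputs unrecoverable
        assert invert(*reversible_add(2, 5)) == (2, 5)     # round trip succeeds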

    You seem to be using "reversible" and "irreversible" in the sense of individual particle interactions, though, which of course is not the same thing - the good old Second Law paradox, or whatever you like to call it, usually seen in examples like "individual collisions between gas molecules are reversible but gaseous diffusion is not"; the same applies to the operation of computational elements.

    738:

    Yes, this is what I'm saying.

    With the added twist that the Many-Worlds prediction for the experiment is deterministic, so it can be falsified in a single shot, whereas the Copenhagen prediction is not, so it can only be falsified in a statistical sense.

    739:

    This is precisely how people propose to do the test: once we have a working quantum computer, use it to simulate a lab doing the measurement, and interfere the two resulting branches. This should be possible to do in the near future.

    The problem, as you might have guessed, is that we need the extra assumption that the simulated physics faithfully represent the real physics. I've had an argument with someone saying that he wouldn't buy it precisely because of this. His position was that the quantum computer was merely simulating the Many-Worlds model of the experiment, and it was no surprise that its results would agree with the pen-and-paper calculation, because a working quantum computer must more-or-less by definition obey our equations.

    About quantum gravity helping solve the interpretational question: I find this plausible, and there is historical precedent, with special relativity settling the absolute-versus-relative space and time in Newtonian mechanics. It is worth mentioning, though, that there is not even a proposal for quantum gravity that incorporates collapse. All attempts - string theory, loop quantum gravity, and crazier stuff - are fully unitary.
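
    As a rough illustration of the difference between the two predictions, here is a toy numpy sketch under strong simplifying assumptions (not a description of any actual proposal): the "friend" is a single memory qubit, the "measurement" is a CNOT, and "collapse" is modelled by deleting the off-diagonal terms of the density matrix.

        import numpy as np

        ket0 = np.array([1, 0], dtype=complex)
        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
        I2 = np.eye(2, dtype=complex)
        CNOT = np.array([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]], dtype=complex)   # control = particle, target = friend

        # Particle goes into a superposition, then the friend "measures" it (gets entangled).
        U = CNOT @ np.kron(H, I2)
        psi = U @ np.kron(ket0, ket0)                    # (|00> + |11>) / sqrt(2): two branches

        # Unitary (Many-Worlds-style) story: Wigner reverses the whole lab, then checks the particle.
        psi_back = U.conj().T @ psi
        p0_unitary = abs(psi_back[0])**2 + abs(psi_back[1])**2

        # Collapse story: the friend's measurement really destroyed the coherence between branches.
        rho = np.outer(psi, psi.conj())
        rho = np.diag(np.diag(rho))                      # drop off-diagonal (interference) terms
        rho_back = U.conj().T @ rho @ U
        P0 = np.kron(np.diag([1, 0]).astype(complex), I2)
        p0_collapse = np.trace(P0 @ rho_back).real

        print(p0_unitary, p0_collapse)                   # 1.0 (deterministic) vs 0.5 (statistical)

    In the fully unitary case Wigner gets the particle back in its starting state on every single run; with collapse he only does so half the time, which is the single-shot versus statistical distinction mentioned a couple of comments up.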

    740:

    I wonder if it would be possible to do the superposition observation on a deep space probe and do something to interfere the signal it beams back as a result...

    And what does unitary mean in the sense of quantum gravity? It all assumes 3+1 dimensions at all scales?

    741:

    (and Greg 723)

    The Second Law and decoherence are intimately related.

    When a computer calculates a+b (and probably the brain as well) it does delete information. By Landauer's principle this information is rejected as heat to the environment, increasing the entropy and making the Second Law happy.

    This account is not detailed enough for decoherence, though. Exactly where was the information encoded and how was it lost? Let's simplify and say that a single bit of information was deleted, and its energy was radiated away as infrared in a NAND gate. The photons then excite other atoms around the CPU, which get hotter, and radiate the photons again in some random direction, eventually getting to the air around the CPU, which gets vented out by a cooling fan. So the information is not destroyed, it is just encoded in a hopelessly complicated way in the positions and momenta of the air molecules.

    Well, this transfer of information from a single bit to a complicated collective degree of freedom is just decoherence.

    There is, however, an extra detail that is important for Aaronson's proposal: if all this mess is perfectly isolated from the rest of the world, it is merely irreversible in practice, à la Second Law, and he demands that it be irreversible in principle. In fact it is technically not decoherence at all if such isolation exists. To get in-principle irreversibility you need a single photon that encodes part of the information to escape the computer and fly away from the Earth.

    Now it is not possible to reverse the computation without catching this photon. And since catching photons flying away from the Earth is kinda hard, people call this in-principle irreversibility.
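
    For scale, a back-of-the-envelope sketch of the Landauer bound at room temperature (illustrative numbers only):

        import math

        k_B = 1.380649e-23          # Boltzmann constant, J/K
        T = 300.0                   # room temperature, K

        # Landauer's principle: erasing one bit must dump at least k_B * T * ln(2) of heat.
        E_bit = k_B * T * math.log(2)
        print(f"{E_bit:.2e} J per erased bit")   # ~2.9e-21 J

    Real logic gates dissipate many orders of magnitude more than this per operation, which is why the bound matters conceptually rather than as an engineering constraint here.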

    742:

    Now I'm wondering how one might go about getting a mass that's big enough to measure a gravitational change on to act coherently in a quantum sense. I'm afraid it involves something like cooling an isotopically pure lump of uranium or thorium down to a few millikelvins and seeing how that affects what it weighs and emits. Hmmmmm.

    743:

    Doing the experiment in a deep space probe would help in isolating it, but still we would need to interfere the whole probe, not only the signal getting back to us. And that is quite hard if the probe is far away.

    It's essentially the same difficulty with building a quantum computer. We want to be able to interact with it strongly enough to precisely control its evolution, while at the same time keeping it isolated enough that it doesn't decohere with the control system.

    744:

    Unitarity doesn't have anything to do with dimensionality, you can have a 10+1 theory that is unitary (or not). A unitary theory is one that is fundamentally reversible (and linear).

    745:

    This is what everyone wants.

    Directly measuring the force of gravity that a lump of mass exerts on another is extremely difficult. AFAIK the most ambitious proposal in this direction is this paper, which wants to measure the gravitational attraction between milligram-scale gold spheres.

    Instead of measuring the force, though, one can try to measure the phase shift caused by gravity. Last year a serious proposal popped up to put neutrons in an interferometer that could measure the phase shift caused by their mutual gravitational attraction. I'm getting embarrassed by repeatedly linking to myself, but I blogged about it here.
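
    To get a feel for why that is hard, a quick order-of-magnitude sketch (illustrative numbers, not the actual parameters of either proposal):

        G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
        m = 1e-6             # one milligram, in kg
        r = 1e-3             # 1 mm centre-to-centre separation

        F = G * m * m / r**2
        print(f"{F:.1e} N")  # ~6.7e-17 N: tens of attonewtons between the two spheres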

    746:

    Remember that I'm not a physicist, so we're not using English the same way. I'm starting from a many-worlds model, where quantum interactions cause "worlds" to build up "in parallel" (in some probability dimension), until interactions cause some of them to merge.

    What happens with gravity in this model? All these many local worlds are interacting, and presumably they're emitting gravitons in some way. If those many alternate worlds interact with each other via gravitons, this looks like a recipe for a black hole: as the number of worlds piles up, the number of gravitons emitted increases exponentially, until the whole thing falls into a singularity.

    Since this is obviously not what happens, there has to be something like decoherence, collapse, or whatever (gravito-gremlins?), that winnows out almost all of the potentially emitted gravitons from each superposed state, so that masses stay constant at observable scales and quantum computers don't accidentally become micro-black holes if there are too many unresolved superpositions piling up inside them.

    747:

    Your mental model of Many-Worlds is a bit off.

    Different worlds can interfere, but it's extremely rare, and gets rarer as the world-splitting spreads through space. It's like all the air in a room spontaneously gathering in a corner. Technically it can happen, but don't count on it.

    And this interference is very different from an interaction. Interaction is about some charged or massive particle exerting force on another. Interference is about particles getting into exactly the same quantum state except for the phase, which then makes the terms of the superposition cancel out or reinforce each other.

    Everything we know about quantum mechanics tells us that the worlds can not interact with each other. At all. Nada nada. For fundamental reasons. We can't really know if this is also the case for gravitational interaction, but everybody suspects it is.

    748:

    Worlds cannot interact with each other, except that interference signals cause subsequent interactions on both worldlines? Is that what I'm reading here? Hoo boy.

    At this point, I start thinking about Hawking Radiation and very small black holes, and wondering what theory would predict for the gravity waves caused by evaporation canceling or reinforcing each other.

    749:

    Everything we know about quantum mechanics tells us that the worlds can not interact with each other. At all. Nada nada.

    EXCEPT at the point of divergence, of course, & that supposedly leaves irreversible changes behind. Sorry, but I'm beginning to smell mystical woo here, too ...

    Somewhere, "we" are missing something - as should have been obvious since the "renormalisation" problem surfaced ( Though that was sat on & suppressed in discussion for many years ) - the incompatibility between Gen Relativity & QM ... etc, yada, yada.

    And we don't have any actually testable ideas ( as far as I can see ) either. We need an outsider's view, a patent-clerk working in an obscure office somewhere, maybe. Though how one such could get her ideas published these days is another question.

    750:

    There is nothing mystical about this, it comes straight out of the equations.

    And the reason for that is just statistical: two quantum states can only interfere if they become exactly identical. Easy to do with a single photon or a single atom. But a whole world, a massively complex entangled state of 10^30 particles that you have no control over? It's not going to happen.
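
    A crude numerical illustration of that "not going to happen" (toy numbers, assuming independent particles each with a single-particle overlap of 0.999 between the two branches):

        import math

        single_overlap = 0.999
        for N in (1e6, 1e9, 1e12):
            log10_total = N * math.log10(single_overlap)
            print(f"N = {N:.0e}: branch overlap ~ 10^{log10_total:.0f}")

        # Even at N = 1e9 the overlap is ~10^-434294; at the 10^30 particles of a
        # macroscopic "world" the branches are, for all practical purposes, orthogonal.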

    751:

    Nonsense. The various forms are no more nor less falsifiable than the savanna hypothesis, which is STILL being accepted despite having a lot of strong evidence against it. The original aquatic ape theory was a bit cuckoo, though not much more than the savanna theory, but I have corresponded with Elaine Morgan and her opinions have evolved to fit the evidence. The most plausible hypothesis I know of could be called the swamp ape theory; I hypothesised it in the 1970s, because I know something about the conditions, but it is now fairly mainstream. If you like, I could post a summary, both of that theory and of (possibly unpublished) reasons why some established theories are obviously bollocks.

    I have also spoken to paleoanthropologists, and they have confirmed that this is more a matter of dogma and heresy than science. There ARE tests for and against, but the problem with paleoanthropology is that you can't obtain evidence to order and early humans were sparsely distributed.

    752:

    Just as modern politics seems to be derived from satire, leading edge physics seems to have derived from fantasy. At that level, everything that is claimed is a deduction from an extrapolation from a formulation that fits known observations.

    And, even now, the discrepancies between quantum mechanics and general relativity have not yet been fully reconciled; when I have spoken to someone who has claimed they have been, it has all come down to them saying "it's how we describe it; the other camp will admit defeat in due course." From both camps, of course ....

    753:

    Especially as regards "String Theory Woo", of course.

    754:

    Here's one cheap little article (written by a reputable paleontologist) about the problems with the aquatic ape: http://www.sciencefocus.com/feature/life/aquatic-apes .

    The bigger problem is that early hominid fossils have this bad habit of being found in and around savanna and forest habitats. This is despite the fact that aquatic environments are actually more favorable to fossilization than are upland ones.

    755:

    And the reason for that is just statistical: two quantum states can only interfere if they become exactly identical. Easy to do with a single photon or a single atom. But a whole world, a massively complex entangled state of 10^30 particles that you have no control over? It's not going to happen.

    And we're back at the infinite local dimensions in probability space problem with interactions (in photons and gravitons/warped space time) rippling out at light speed, rather than a single 3+1 worldline with a mechanism for collapsing superpositions.

    Oh well.

    756:

    Which also points out the OTHER problem with the many-worlds mystical "solution" to QM problems ( as opposed to the Copenhagen mysticism, that is, of course ). Namely, that at every splitting of the worlds, matter &/or energy is being created ex nihilo so as to provide a new universe ( or even a pocket one ) on its own to go off & "do its thing".....

    Maybe that's where all the missing matter & energy in the observable universe has gone? Or maybe not. Like I said a short while back at #748 - "we" are missing something. Probably something obvious, right in front of our faces, but which requires a different viewpoint.

    757:

    Bugger - misplaced angle-bracket there

    [[ Fixed. As are errors in the previous two comments that Heteromeles made - mod ]]

    758:

    Problem? That's just life.

    759:

    There is no creation of mass/energy. I already explained that in my comment 544.

    Come on, if there were a problem with conservation of energy Many-Worlds would not be taken seriously.

    760:

    Thanks Mods. Much appreciated.

    I'll admit that when I look at many worlds, my first thought is, "Oooh, that's where dark matter/energy come from."

    Problem is, planets and stars are dissipative structures where a lot of stuff happens (unlike the space between the stars, where nothing much happens). If a many worlds multiverse was generating "dark" mass-energy that only interacts via gravity (e.g. alternate worldlines that literally weighed on each other), these worldlines would tend to proliferate and weigh on each other on stars and planets, especially life-bearing planets (because life is an even better entropy generator than stars are, apparently). If too much stuff happened, you might even expect some sort of weird dark matter singularity to happen, as many worlds collapsed into each other. Since we're a highly active world and we haven't noticed gravity increasing locally, this argues that my first thought is junk.

    I think if we're talking about non-interacting world-lines, where the effects of local worldlines nonetheless ripple out across spacetime via photons and gravitons, then we've got an infinity of infinities problem, because all possible combinations of local worldlines interacting should happen. That's kinda messy too.

    Or we could just talk about collapse of superpositions caused by observation, and a single universe wherein God's only function is to observe quantum interactions and cause decoherence. It sounds stupid, but when you look at the gymnastics you have to go through to do away with it, it is simpler. (And note that this take on the divine has a bit more in common with Yog Sothoth than with any big bearded sky fairy).

    761:

    "Cheap little article" is about right. I could do an equally damning article about the favoured savanna theories, even WITHOUT being so shoddy.

    Firstly, he attacks the original theory (which even Morgan agrees was misguided) and, secondly, he makes a false statement about the possibility of a swamp phase. It is the ONLY hypothesis for the development of obligate bipedalism I have seen that doesn't conflict with known facts (such as Darwinian selection) - no, that doesn't prove it's right, but it damn well DOES show that it ought to be treated at least as seriously as the hypotheses with known flaws!

    Your last paragraph is, on the face of it, quite a strong argument - but has some quite serious weaknesses. One point is that we are quite likely considering a single, small area - which might well have been deeply covered since or simply not investigated. Take, for example, a plausible candidate near where I used to live - Lake Bangweulu - I don't know how much archaeology has been done on the bottom of the swamp, but there had been damn-all pretty recently.

    The trouble about relatively rare fossils is that they are identified only when they come to light - it just isn't feasible to investigate all plausible areas, because it costs too much.

    762:

    Um. It is currently believed that there is no creation of mass/energy. On the other hand, if the flaws in the current model proved to be due to something as basic as Hubble's hypothesis of the cause of the red shift being mistaken, all bets would be off.

    763:

    Many-Worlds is proposed as an interpretation of the formalism of quantum mechanics, where energy is conserved as a fundamental law. You are speculating about some alternative to quantum theory, where energy is not conserved, that would also allow a many worlds interpretation. That's pretty far-fetched.

    764:

    You do need to count up the number of quadrupedal primates currently living in swamps before you assign that niche to humans. It's much greater than one.

    You also have to look at current human habitat preferences. Swamp is near the bottom of the list. That's unusual, if you're claiming that we went through a swamp phase. Quite honestly, being bipedal in a swamp is not an advantage unless you also have wings.

    765:

    Eh? Yes, there are, but that's irrelevant, because I am referring to the specific situation. Here is a VERY brief summary.

    The dry season in the savanna means that large surface mammals (LCA and up) either have to live close to the wooded river beds and lakes or be able to travel long distances. Worse, there are several major cursorial and pack predators. These are why even baboons stay close to woodland, and chimpanzees are not found on the savanna. Even worse, the intermediate stage between quadrupedality and bipedality is slower and has less endurance than either, so the fact that Homo sapiens has effective locomotion does not explain how that developed.

    Now, consider a wet spell (a century or few) followed by a dry spell. As the forest recedes, the LCA would necessarily have to follow it, or move into river beds or to lakesides. But why would that encourage an upright posture, and later bipedality?

    Firstly, crocodiles dislike shallow water and are visible if they move in it. And, of course, the higher your eyes are, the further you can see. So there is an advantage to being able to hold that position for extended periods. Yes, of course, I am assuming one or more members of a pack keeping watch while the rest forage.

    Secondly, such sluggish, muddy waters hold a LOT of readily-available food, even when the dry terrain holds essentially none. Invertebrates, obviously, but the tilapia are often dense enough that they can be driven and caught with bare hands (yes, I have seen it). Driving with a thorn branch would be better of course, and is an obvious early tool. Equally clearly, the bipedal position is better for wading and much better for using thorn branches that way.

    Did it happen like that? God alone knows. Is it at least plausible? Yes.

    Now compare it with the ALTERNATIVE hypotheses of how bipedality developed, which vary from the obvious nonsense to the highly implausible.

    766:

    There is no creation of mass/energy.

    Oh, how very convenient.

    I already explained that in my comment 544.

    Which I might or might not buy. The whole superstructure looks decidedly wobbly to me, at any rate.

    767:

    Speaking of quantum effects, it looks like quantum communication is becoming more capable.

    https://www.space.com/39438-micius-china-quantum-key-intercontinental.html

    768:

    I'd point out there's some evidence that chimps walk bipedally using the same mechanisms as humans do. This calls into question the whole idea that an intermediate, semi-bipedal form was worse than a chimp or a human. Indeed, chimps moving quadrupedally can hit 25 mph, slightly faster than Usain Bolt at 23.25 mph. Arguably, going bipedal slowed us down slightly, but a semi-quadrupedal animal might be even faster than a fully bipedal one. Living more like a ground hornbill evidently led to some mutant ape having more offspring in the distant past.

    Oh, and where do we find the highest concentration of chimpanzees now? in Congolese swamp forests, because that's where the lowest density of humans is right now. If bipedality were that great for swamps, we'd be in there and the chimps would be out with the lions on the savanna, because they could outrun us and leave us for the lions.

    769:

    No, it doesn't; people have looked at the biomechanics, and they were essentially certain that what I said is true. Inter alia, as legs lengthen, quadrupedal locomotion becomes less effective. It's irrelevant whether they can walk upright - what matters (for the savanna theory) is whether they could evade cursorial/pack predators and travel long distances for water.

    For heaven's sake, YOU know more about Darwinian selection than THAT! For such a change to occur, there has to be a positive advantage in the direction of change at all stages. Evolution is incremental, not cataclysmic. So what positive benefit are you claiming for EACH STEP during the period of the change?

    The Congolese swampland is NOTHING like the terrain I am talking about; it's embedded in tropical forest, whereas Lake Bangweulu etc. are embedded in savanna. As you know, the year-round availability of food is VASTLY better in the former. Also, my understanding is that there is no evidence for chimpanzees EVER having lived on the savanna! Do you have any evidence for your implication that they used to live there?

    Furthermore, you are responding to a straw man, though I will accept that I may have been unclear. I am not claiming a universal advantage in swampland, nor that we stayed there after bipedalism developed, nor ....

    The hypothesis is that a small population of the LCA was FORCED into a small area of swamps/streambeds by a drying climate, with the (widespread) forest turning into (dry) savanna, and adapted to a largely unoccupied niche, possibly because the obvious ones were already occupied, or because they weren't viable in the location. Speculative, yes, but that scenario is exactly what causes rapid evolution, and I have explained why it would favour the incremental development of bipedalism.

    770:

    Now you're beginning to understand what the problem with the aquatic ape theory is, because you've now narrowed down the origin of bipedality to a single, hypothetical swamp in Africa, where there was apparently an evolutionarily sudden switch of a large number of genes to allow a bunch of chimp-like apes to suddenly go upright, and from there to run out onto the savanna more slowly than their quadrupedal ancestors could run, but still able to outrun a lion, even though a chimp theoretically couldn't pull that off either (chimps hit 25 mph, the human record is 23 mph and most of us are a lot slower than that, while lions hit 30 mph).

    So here are the problems:

    --What swamp? It's got to be a very, very special one where the water is shallow enough to wade in, but somehow clear enough that the Nile crocodiles can't sneak up on the wading apes (http://www.pressreader.com/south-africa/the-mercury/20150702/281694023437941) (note the part about how many people get attacked by Nile crocs because they think that crocs don't attack in shallow water). That's getting awfully picky, and I'd rather have someone from Africa comment about how widespread such swamps are.

    --There's no (known) bipedalism gene, so switching from quadrupedal to bipedal will take a while, as it involves changes to multiple genes. That's a long-lived swamp too, incidentally.

    --Bipedalism slows you down a bit, although it might add to endurance. Outjogging a lion is a tricky operation, and I don't think you could pull it off against a hyena or wild dog.

    --Obligate human bipeds go into swamps generally as a last resort, not a first one, because wading through mud sucks. I've never worked in a true swamp, but I've done plenty of marsh work, and it is NOT fun. There are some people who like it, but I'm not one of them.

    Now, if you want a cheaper answer that gets rid of the aquatic silliness, I think Runner's World accidentally nailed it with https://www.runnersworld.com/run-the-numbers/outrun-the-worlds-most-dangerous-animals (note that you may have to sign up for spamnation to get to this article. Sorry). I stumbled across it when I was looking for chimp top speeds. There are two critical points here:

    1. A bunch of predators can outrun humans. Bipedalism isn't about speed.

    2. You can still escape a threat if you have a good-enough lead distance, meaning you can see the threat something like 100 meters away and there's a tree nearby, which is what open savannas are all about.

    So if you want the evolutionary story, I'd suggest the sentinel scenario: standing upright gives you a better view, so that you have a better chance of spotting dangers. Living largely in a savanna (as opposed to a dense forest or a grassland) gives you a reasonable chance of spotting a threat at a good distance (the open part) and making it to a nearby tree before the lion beats you there (the tree part). It won't work against a leopard, but being up a tree with a couple of your mates, all of whom are freaking out and ripping branches off to throw, will probably give the spotty kitty pause about coming after you, and will give her multiple targets if she decides to do it anyway (lowering your odds of dying).

    Unlike you (most likely) I have gone quadrupedal, back when I played capoeira. In capoeira, only your hands, feet, and head are supposed to touch the ground, but there's a lot of floor play where you go onto all fours (to kick someone in the head or dodge a kick to the head, for example), and it's not as hard to move in that position as you might think, although it's not as fast as running and you do have to be flexible to accommodate your long legs. Unlike you, I don't have a problem with a quadrupedal/bipedal transition, if the utility of going bipedal isn't about increasing speed but increasing your starting distance from threats, and so long as bipedalism doesn't interfere with your ability to quickly climb a tree (which, according to the analysis of the skeleton of the fully bipedal Australopithecus afarensis, it didn't, although it arguably does now).

    Contrast that with ol' swampy ape, bipedal but crouched in the soup all day, face level with the water while she feels around for those so-nutritious rhizomes, clams, and crayfish you were going on about (you have tried to actually do this, no? I have rooted around in a marsh, at least. I won't mention the mosquitoes or the humidity either). When a crocodile comes up behind her and grabs her ass, she has no warning, because her face was level with the water as she feels around with those too-short arms. Having longer arms would be more useful in this situation, but no, she needs longer legs, because she's going to, erm, splash awkwardly through the muck away from the croc, because bipedality works better in the water. Yeah. There's probably a reason the croc has very short legs, a powerful tail, and adjustable buoyancy. That's what works best in a swamp. If you've ever watched those videos of lions hunting in the Okavango Delta (which is also a swamp), you'll see that they're not slowed down that much by water either.

    Your riposte, sir?

    771:

    Which reminds me that probably the best way for a human to locomote in a swamp is more or less to imitate a crocodile - ie. lie down in it and sort of swim/wriggle along. People, at least modern humans wearing clothes, don't generally do this because eurgh yuck, but it beats having your leg go in up to your knee and then getting even more stuck with the other leg while you try to pull the first leg out.

    772:

    I will start with one point: "That's getting awfully picky, and I'd rather have someone from Africa comment about how widespread such swamps are."

    What about me? AS I SAID, I have lived in some relevant areas - try Choma, Chilang and Mansa - and that was BEFORE the crocodiles were almost completely trapped out. AS I ALSO SAID, try Lake Bangweulu. Look it up. There are others. I will pay you the courtesy of assuming that you haven't started to troll, but you're not making it easy.

    And, AS I SAID, the upright position gives an advantage for the members of the group who are on watch. You ARE aware that's how baboons and others operate, aren't you?

    And, yes, I have waded through such things, felt for molluscs, and been involved in fish driving in exactly the conditions I described. And I have waded off the coast of Kenya (I was trying to swim, but it was too shallow), and could have picked up 10 Kg of sea slugs etc. off the bottom in an hour. In all cases, I found my bipedalism a great help when doing that.

    No, I do not give this hypothesis more than a good chance of being partially right, but (like Elaine Morgan) I get sick of the hypocrisy of the establishment that applies different standards to its favoured dogmas to everything else.

    Now, let's deal with that runner nonsense.

    "You can still escape a threat if you have a good-enough lead distance, meaning you can see the threat something like 100 meters away and there's a tree nearby, which is what open savannas are all about."

    Thus spoken by someone who has not been there. Most of the drier parts of the savanna do NOT have enough climbable trees to protect a group, even excluding the fact that the development of bipedalism hampers climbing ability. They tend to be very scrubby, impossibly thorny or baobabs, and are often sparse. The climbable trees are concentrated around stream beds, on the kopjes etc. There are reasons baboons tend not to go far from such things, and that's one of them.

    And, AS I SAID, I was talking about a dry spell. In such conditions, the trees tend to get sparser, scrubbier and less useful. And it is well-known there have been long periods of wetter and drier spells in the relevant areas.

    Also, claiming that, because modern, fully bipedal humans can run AND WALK LONG DISTANCES fairly well, an intermediate form could, is exactly the conflict with Darwinian evolution I am talking about. It's complete bollocks.

    As far as I know, not one single animal is known to have evolved cursorial bipedal locomotion in all prehistory in a context where cursorial predators were a serious threat. There is a good evolutionary reason for this, which is that the intermediate form is less efficient. Hopping is completely different.

    Furthermore, you ignored the point about food and water. There are MAJOR problems getting enough food towards the end of the dry season for an LCA, chimp or human. Even baboons have trouble. And water: there are good reasons that almost all (all?) savanna mammals either burrow or can walk long distances. In a dry spell, most of the savanna would have been a completely no-go area for the LCA, a chimp or a half-bipedal ape.

    773:

    I think dinosaurs would rather strongly disagree with you about the idea that bipedal locomotion could not evolve in a context where cursorial predators were a serious threat. Look up how dinosaurs rose to dominance in the Triassic.

    As for Elaine Morgan, her problem is the same as yours: shifting context every time someone points out that there's a problem with some part of your hypothesis, just so you can be right. Yes, paleo-anthropologists can be (and have been) assholes to people like Louis Leakey, but saying that anyone disdained by the establishment is right is the classic Galileo defense.

    Now, we've bounced from swamps to sea coasts, and we've avoided every big swamp (say in the Congo) in favor of...what? The Okavango?

    Yes, I read Elaine Morgan's book and liked it. The aliens in my first book were designed as a species that became much more sapient as amphibians, and some of the behaviors she described were in there (like women having long hair so that their children could hold onto their heads and get a ride). Then I read the refutations and watched the fossil evidence pile up, and I think that side has it more correct. That's all. Science is about falsifiability. When that late Miocene aquatic ape fossil turns up, I'll admit I'm wrong.

    Finally, it's worth looking at the late Miocene, which is when Antarctica iced up and the world got colder and drier, heading towards the ice ages. This is the time when hominids split off from other apes, and it's also the time when savannas with C4 grasses spread across East Africa. If I wanted to bet on why an ape would settle the savanna, I'd simply guess that its ancestors were there during the previous forest period, and some took advantage of the shift by going bipedal. Why they didn't turn into the pongoid equivalents of baboons I have no idea, but that's evidently what happened.

    774:

    And, AS I SAID, the upright position gives an advantage for the members of the group who are on watch.

    Which to me is an argument for the seaside, not the swamp. Sentinels only work for groups, and AFAIK there aren't any medium to large reptiles, mammals, or birds that live in swamps and form groups.

    As far as I know, not one single animal is known to have evolved cursorial bipedal locomotion in all prehistory in a context where cursorial predators were a serious threat.

    When restricted to mammals and ignoring dinosaurs and birds, true. However there aren't any examples of mammals evolving cursorial bipedal locomotion other than hominids, full stop. So it's just as true to say there aren't any examples of mammals evolving bipedal locomotion in swamps. We are highly improbable mammals.

    775:

    Plato's definition of a human: A (large) featherless biped - with flat nails ....

    776:

    I'm NOT ignoring dinosaurs and birds! Neither evolved their bipedal locomotion in a context where they had to run from cursorial predators. There weren't any at the time of the former, and birds learnt it while they could still fly. Dammit, you can see the intermediate stage today, with birds that prefer to run but will fly if they have to.

    And, yes, we are highly improbable animals (we don't have to restrict it to mammals). My point is that the hypothesis I am describing is plausible, because it doesn't conflict with any known facts, and the establishment's hypotheses (they have several) aren't, because they do. Given the probabilities involved, there had to be some POSITIVE OVERALL advantage over a long period for developing increasing bipedality. The position that "WE don't have to justify our claims because WE are the establishment" just isn't science.

    My hypothesis is that it started with foraging under the conditions I described (which fit with what we know of central Africa) and turned into obligate bipedal locomotion later.

    Aside to Greg Tingey: I can also explain why the savanna hunter theory for the development of hairlessness is implausible; I favour the reduction in parasite load following the invention of clothing (i.e. skins).

    777:

    All right, you ARE trolling. You are committing EXACTLY the offences you accuse me of, only worse.

    Let's consider your claim. You assert that a savanna-living group of LCAs could run to the nearest trees to get away from predators. Even ignoring the problems with that that I mentioned, it explains the evolutionary pressure to REDUCE climbing ability (which is what happened) exactly HOW?

    Look at YOUR first paragraph of #770 and my second one of #776. You are simultaneously claiming that the wading ape hypothesis is wrong because this needs too much evolutionary change and claiming that no evolutionary pressure for bipedalism is needed.

    778:

    To clarify, you're advocating the hypothesis that apes evolved a bipedal aquatic form for wading, then as soon as that bipedalism evolved, they almost completely left the water and went into a landscape of gallery forests and savannas and became humans there, studiously avoiding living in aquatic environments as much as they could until at least Homo erectus evolved - and, in large proportion, right down to the modern day.

    Conversely, I note that chimps show the same general mechanism of walking as humans do. They just don't do it so much because it's inefficient. That's evidence that the mechanism we use for bipedalism was present in the common ancestor of chimps and humans, even though it was quadrupedal. That's one hurdle down for the evolution of bipedality.

    Second hurdle: what would cause apes to go bipedal? Global climate change at the end of the Miocene (Antarctica icing up) caused the appearance and spread of savannas in Africa as the climate got drier and colder (not wetter, so fewer lakes, rivers, and marshes), and the savannas replaced the forests that used to be there. The subsequent appearance of a savanna-dwelling ape lineage seems unremarkable, because most animal lineages evolved savanna-dwelling forms in the late Miocene and Pliocene. What's weird is that those apes didn't converge with baboons on a quadrupedal savanna lifestyle, and instead went in for bipedalism.

    We both agree that bipedalism slows mammalian runners down, so any argument based on increased running speed from bipedalism is counterfactual. We both agree that being able to see further is the ultimate advantage, because it allows bipeds to avoid danger.

    At this point we split. You want to invoke a wading phase in the earliest human ancestry,

    --despite the lack of fossil evidence for this,

    --despite the modern diversity of swamp-dwelling primates (which range from lemurs to apes), all of which are quadrupedal and many of which swim, and a few of which gather resources under water,

    --despite the fact that this environment was present for millions of years when apes were around, and was decreasing precisely when humans became bipedal, and

    --despite the fact that bipedalism isn't actually that useful in the water, either for swimming through the water, wading in mud, or finding submerged food.

    You point out that bipedality is useful for carrying food, but then again, bipedality is useful for carrying stuff in any environment.

    On the other hand, I see chimps reportedly walking bipedally using the same mechanism as humans do, see a relative advantage to having longer legs and spending more time bipedal in an increasingly open environment, and see a fairly straightforward reason why increasing bipedality would be beneficial as savannas spread. There's no obvious point at which our semi-bipedal ancestors were more clumsy than both apes and humans, so there's no barrier to the evolution of bipedality. If you don't believe me, look at the arm length of Australopithecines. Proportionally the early ones had much longer arms than we do, but they were just as bipedal as we are.

    There's no trolling here. The aquatic ape hypothesis just doesn't work as well.

    Finally, if you want to understand the dinosaur argument, I'd suggest you read Dawn of the Dinosaurs, which is an accessible history of the evolution of dinosaurs in the Triassic. The relevant points are:

    --Dinosaurs in the Triassic, when they first evolved, were far from the only large land animals, and the early ones were quite small.

    --While I'm not clear that ALL dinosaurs were initially bipedal, that's what I remember. Certainly prosauropods started off bipedal, theropods started off bipedal and (Spinosaurus aside) stayed that way, and I believe that ceratopsians were initially bipedal.

    --These small, bipedal dinosaurs showed up in an environment where the largest predators (IIRC) were all large, quadrupedal relatives of modern crocodiles. Indeed, it's not clear, at least to me, why crocs didn't take over the world in the Triassic.

    --Birds evolved from theropods in the late Jurassic, ninety-odd million years later, and they inherited their bipedality from their theropod ancestors. Bird evolution is irrelevant to this story.

    The only point of bringing up dinosaur evolution in the Triassic is that it's an example of cursorial bipedalism evolving in the face of predation by larger quadrupeds. Therefore it's entirely possible.

    I hope this clarifies things.

    779:

    And this interference is very different from an interaction. Interaction is about some charged or massive particle exerting force on another. Interference is about particles getting into exactly the same quantum state except for the phase, which then makes the terms of the superposition cancel out or reinforce each other.

    In this kind of discussion, it seems that some mathematics is necessary. With words, you get into all kinds of problems, such as assuming that because a space is curved, it's essential to postulate extra dimensions to contain it. Or, I suppose, to hold the Many Worlds ...

    So to help me understand your posts: is the picture below a correct description of "particles getting into exactly the same quantum state except for the phase"? Can the two particles I've drawn interfere?

    If their state has to include position and velocity, I presume not, since in my picture these are different.

    780:

    Your last sentence is correct: the quantum state includes everything, even position and velocity. For the particles to interfere they must be completely indistinguishable, on the fundamental level. So there wouldn't be interference in your drawing.

    This is quite hard to do, actually, and the usual interference experiments - double-slit interferometer and Mach-Zehnder interferometer - are about a single particle interfering with itself. It is kind of cheating: you don't really know the quantum state of the particle, but since it is a single particle there is nothing for it to differ from. You then split it into two paths, and now the only difference between the quantum states is just the path degree of freedom; when you recombine the paths, this difference disappears, and you have interference.

    But it is possible to interfere two different particles. The best example is the famous Hong-Ou-Mandel interferometer. Even then, you have to do a lot of work to synchronize the sources of the particles.
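
    A minimal amplitude sketch of the Hong-Ou-Mandel effect, assuming an ideal 50:50 beamsplitter (illustrative only):

        # One photon enters each input port of a 50:50 beamsplitter.
        t = 1 / 2**0.5        # transmission amplitude
        r = 1j / 2**0.5       # reflection amplitude (conventional i phase)

        # Amplitude for a "coincidence" (one photon in each output port) is the sum of
        # two indistinguishable paths: both photons transmitted, or both reflected.
        amp = t * t + r * r
        print(abs(amp)**2)    # ~0 -- the paths cancel; the photons always exit together

        # If the photons were distinguishable (say, arriving at different times), the
        # paths could be told apart, so probabilities rather than amplitudes add:
        print(abs(t * t)**2 + abs(r * r)**2)   # ~0.5

    Which is the sense in which the two-particle interference only shows up once the particles are made genuinely indistinguishable.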

    781:

    But it is possible to interfere two different particles. The best example is the famous Hong-Ou-Mandel interferometer. Even then, you have to do a lot of work to synchronize the sources of the particles.

    Thanks. So that clears one thing up for me. One can interfere more than one particle: I don't need to think only of double-slit experiments and the like.

    Next question: where can I find something that explains what kind of mathematical entity a "world" in the Many-Worlds interpretation is?

    782:

    Do such places actually exist on Earth?

    https://en.wikipedia.org/wiki/Perchlorate

    Naturally occurring perchlorate ... can be found commingled with deposits of sodium nitrate in the Atacama Desert of northern Chile. Also Lubbock, Texas and Florida, produced by lightning discharges in the presence of chloride.

    https://en.wikipedia.org/wiki/Perchlorate#Contamination_in_environment

    783:

    I don't know how much mathematics you are interested in; it can get quite hairy. I think this paper does a good job of being precise without getting bogged down in details. See sections 3 and 4.

    In a nutshell, a world is a branch of the universal wavefunction that hardly interferes with other branches, and that follows approximately the laws of classical mechanics. Mathematically, it is a vector of complex numbers describing a massive amount of particles with very little entanglement between them, and such that its inner product with other such vectors is very small.
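
    In symbols, a rough paraphrase of the above (not a quote from the paper): the universal state decomposes approximately as

        $$|\Psi\rangle \;\approx\; \sum_i c_i\,|\mathrm{world}_i\rangle, \qquad \langle \mathrm{world}_i|\mathrm{world}_j\rangle \approx 0 \ \ (i \neq j),$$

    with each $|\mathrm{world}_i\rangle$ a product-like state of very many particles that evolves quasi-classically on its own.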

    784:

    That's a nice paper. I think it's improved my understanding of emergence too. Equations (5) and (6) are the key ones, it seems. Now I need to understand decoherence well enough to see what (2) and (3) are telling me.
