Back to: PSA: Please don't nominate the Laundry Files for a Best Series Hugo Award (this year) | Forward to: New Book Week!

Dude, you broke the future!

This is the text of my keynote speech at the 34th Chaos Communication Congress in Leipzig, December 2017.

(You can also watch it on YouTube, but it runs to about 45 minutes.)

Abstract: We're living in yesterday's future, and it's nothing like the speculations of our authors and film/TV producers. As a working science fiction novelist, I take a professional interest in how we get predictions about the future wrong, and why, so that I can avoid repeating the same mistakes. Science fiction is written by people embedded within a society with expectations and political assumptions that bias us towards looking at the shiny surface of new technologies rather than asking how human beings will use them, and to taking narratives of progress at face value rather than asking what hidden agenda they serve.

In this talk, author Charles Stross will give a rambling, discursive, and angry tour of what went wrong with the 21st century, why we didn't see it coming, where we can expect it to go next, and a few suggestions for what to do about it if we don't like it.

Good morning. I'm Charlie Stross, and it's my job to tell lies for money. Or rather, I write science fiction, much of it about our near future, which has in recent years become ridiculously hard to predict.

Our species, Homo sapiens sapiens, is roughly three hundred thousand years old. (Recent discoveries have pushed back the date of our earliest remains that far; we may be even older.) For all but the last three centuries of that span, predicting the future was easy: natural disasters aside, everyday life in fifty years' time would resemble everyday life fifty years ago.

Let that sink in for a moment: for 99.9% of human existence, the future was static. Then something happened, and the future began to change, increasingly rapidly, until we get to the present day when things are moving so fast that it's barely possible to anticipate trends from month to month.

As an eminent computer scientist once remarked, computer science is no more about computers than astronomy is about building telescopes. The same can be said of my field of work, written science fiction. Scifi is seldom about science—and even more rarely about predicting the future. But sometimes we dabble in futurism, and lately it's gotten very difficult.

How to predict the near future

When I write a near-future work of fiction, one set, say, a decade hence, there used to be a recipe that worked eerily well. Simply put, 90% of the next decade's stuff is already here today. Buildings are designed to last many years. Automobiles have a design life of about a decade, so half the cars on the road will probably still be around in 2027. People ... there will be new faces, aged ten and under, and some older people will have died, but most adults will still be around, albeit older and grayer. This is the 90% of the near future that's already here.

After the already-here 90%, another 9% of the future a decade hence used to be easily predictable. You look at trends dictated by physical limits, such as Moore's Law, and you look at Intel's road map, and you use a bit of creative extrapolation, and you won't go too far wrong. If I predict that in 2027 LTE cellular phones will be everywhere, 5G will be available for high bandwidth applications, and fallback to satellite data service will be available at a price, you won't laugh at me. It's not like I'm predicting that airliners will fly slower and Nazis will take over the United States, is it?

And therein lies the problem: it's the 1% of unknown unknowns that throws off all calculations. As it happens, airliners today are slower than they were in the 1970s, and don't get me started about Nazis. Nobody in 2007 was expecting a Nazi revival in 2017, right? (Only this time round Germans get to be the good guys.)

My recipe for fiction set ten years in the future used to be 90% already-here, 9% not-here-yet but predictable, and 1% who-ordered-that. But unfortunately the ratios have changed. I think we're now down to maybe 80% already-here—climate change takes a huge toll on infrastructure—then 15% not-here-yet but predictable, and a whopping 5% of utterly unpredictable deep craziness.

Ruling out the singularity

Some of you might assume that, as the author of books like "Singularity Sky" and "Accelerando", I attribute this to an impending technological singularity, to our development of self-improving artificial intelligence and mind uploading and the whole wish-list of transhumanist aspirations promoted by the likes of Ray Kurzweil. Unfortunately this isn't the case. I think transhumanism is a warmed-over Christian heresy. While its adherents tend to be vehement atheists, they can't quite escape from the history that gave rise to our current western civilization. Many of you are familiar with design patterns, an approach to software engineering that focusses on abstraction and simplification in order to promote reusable code. When you look at the AI singularity as a narrative, and identify the numerous places in the story where the phrase "... and then a miracle happens" occurs, it becomes apparent pretty quickly that they've reinvented Christianity.

Indeed, the wellsprings of today's transhumanists draw on a long, rich history of Russian Cosmist philosophy exemplified by the Russian Orthodox theologian Nikolai Fyodorovich Fyodorov, by way of his disciple Konstantin Tsiolkovsky, whose derivation of the rocket equation makes him essentially the father of modern spaceflight. And once you start probing the nether regions of transhumanist thought and run into concepts like Roko's Basilisk—by the way, any of you who didn't know about the Basilisk before are now doomed to an eternity in AI hell—you realize they've mangled it to match some of the nastiest ideas in Presbyterian Protestantism.

If it walks like a duck and quacks like a duck, it's probably a duck. And if it looks like a religion it's probably a religion. I don't see much evidence for human-like, self-directed artificial intelligences coming along any time soon, and a fair bit of evidence that nobody except some freaks in university cognitive science departments even wants it. What we're getting, instead, is self-optimizing tools that defy human comprehension but are not, in fact, any more like our kind of intelligence than a Boeing 737 is like a seagull. So I'm going to wash my hands of the singularity as an explanatory model without further ado—I'm one of those vehement atheists too—and try and come up with a better model for what's happening to us.

Towards a better model for the future

As my fellow SF author Ken MacLeod likes to say, the secret weapon of science fiction is history. History, loosely speaking, is the written record of what and how people did things in past times—times that have slipped out of our personal memories. We science fiction writers tend to treat history as a giant toy chest to raid whenever we feel like telling a story. With a little bit of history it's really easy to whip up an entertaining yarn about a galactic empire that mirrors the development and decline of the Hapsburg Empire, or to re-spin the October Revolution as a tale of how Mars got its independence.

But history is useful for so much more than that.

It turns out that our personal memories don't span very much time at all. I'm 53, and I barely remember the 1960s. I only remember the 1970s with the eyes of a 6 to 16 year old. My father, who died last year aged 93, just about remembered the 1930s. Only those of my father's generation can directly remember the Great Depression and compare it to the 2007/08 global financial crisis. But westerners tend to pay little attention to cautionary tales told by ninety-somethings. We modern, change-obsessed humans tend to repeat our biggest social mistakes when they slip out of living memory, which means they recur on a time scale of seventy to a hundred years.

So if our personal memories are useless, it's time for us to look for a better cognitive toolkit.

History gives us the perspective to see what went wrong in the past, and to look for patterns, and check whether those patterns apply to the present and near future. And looking in particular at the history of the past 200-400 years, the age of increasingly rapid change, one glaringly obvious deviation from the norm of the preceding three thousand centuries stands out: the development of Artificial Intelligence, which happened no earlier than 1553 and no later than 1844.

I'm talking about the very old, very slow AIs we call corporations, of course. What lessons from the history of the company can we draw that tell us about the likely behaviour of the type of artificial intelligence we are all interested in today?

Old, slow AI

Let me crib from Wikipedia for a moment:

In the late 18th century, Stewart Kyd, the author of the first treatise on corporate law in English, defined a corporation as:

a collection of many individuals united into one body, under a special denomination, having perpetual succession under an artificial form, and vested, by policy of the law, with the capacity of acting, in several respects, as an individual, particularly of taking and granting property, of contracting obligations, and of suing and being sued, of enjoying privileges and immunities in common, and of exercising a variety of political rights, more or less extensive, according to the design of its institution, or the powers conferred upon it, either at the time of its creation, or at any subsequent period of its existence.

—A Treatise on the Law of Corporations, Stewart Kyd (1793-1794)

In 1844, the British government passed the Joint Stock Companies Act, which created a register of companies and allowed any legal person, for a fee, to register a company, which existed as a separate legal person. Subsequently, the law was extended to limit the liability of individual shareholders in event of business failure, and both Germany and the United States added their own unique extensions to what we see today as the doctrine of corporate personhood.

(Of course, there were plenty of other things happening between the sixteenth and twenty-first centuries that changed the shape of the world we live in. I've skipped changes in agricultural productivity due to energy economics, which finally broke the Malthusian trap our predecessors lived in. This in turn broke the long term cap on economic growth of around 0.1% per year in the absence of famine, plagues, and wars depopulating territories and making way for colonial invaders. I've skipped the germ theory of diseases, and the development of trade empires in the age of sail and gunpowder that were made possible by advances in accurate time-measurement. I've skipped the rise and—hopefully—decline of the pernicious theory of scientific racism that underpinned western colonialism and the slave trade. I've skipped the rise of feminism, the ideological position that women are human beings rather than property, and the decline of patriarchy. I've skipped the whole of the Enlightenment and the age of revolutions! But this is a technocentric congress, so I want to frame this talk in terms of AI, which we all like to think we understand.)

Here's the thing about corporations: they're clearly artificial, but legally they're people. They have goals, and operate in pursuit of these goals. And they have a natural life cycle. In the 1950s, a typical US corporation on the S&P 500 index had a lifespan of 60 years, but today it's down to less than 20 years.

Corporations are cannibals; they consume one another. They are also hive superorganisms, like bees or ants. For their first century and a half they relied entirely on human employees for their internal operation, although they are automating their business processes increasingly rapidly this century. Each human is only retained so long as they can perform their assigned tasks, and can be replaced with another human, much as the cells in our own bodies are functionally interchangeable (and a group of cells can, in extremis, often be replaced by a prosthesis). To some extent corporations can be trained to service the personal desires of their chief executives, but even CEOs can be dispensed with if their activities damage the corporation, as Harvey Weinstein found out a couple of months ago.

Finally, our legal environment today has been tailored for the convenience of corporate persons, rather than human persons, to the point where our governments now mimic corporations in many of their internal structures.

What do AIs want?

What do our current, actually-existing AI overlords want?

Elon Musk—who I believe you have all heard of—has an obsessive fear of one particular hazard of artificial intelligence—which he conceives of as being a piece of software that functions like a brain-in-a-box—namely, the paperclip maximizer. A paperclip maximizer is a term of art for a goal-seeking AI that has a single priority, for example maximizing the number of paperclips in the universe. The paperclip maximizer is able to improve itself in pursuit of that goal but has no ability to vary its goal, so it will ultimately attempt to convert all the metallic elements in the solar system into paperclips, even if this is obviously detrimental to the wellbeing of the humans who designed it.
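The distinctive failure mode here, self-improvement coupled to an unrevisable objective, can be caricatured in a few lines of code. This is a toy sketch, not anyone's real AI: every name and number is invented for illustration.

```python
# Toy sketch of a fixed-goal maximizer: the agent can improve its own
# efficiency (self-improvement) but has no mechanism for revising its goal.
def paperclip_maximizer(metal_reserves, steps):
    paperclips = 0
    efficiency = 1  # clips produced per unit of metal consumed per step
    for _ in range(steps):
        # "Self-improvement": the agent gets better at pursuing its goal...
        efficiency *= 2
        # ...then pursues the one hard-coded goal, regardless of side effects.
        consumed = min(metal_reserves, efficiency)
        metal_reserves -= consumed
        paperclips += consumed
        if metal_reserves == 0:  # resource base exhausted; goal unchanged
            break
    return paperclips, metal_reserves

clips, left = paperclip_maximizer(metal_reserves=1000, steps=50)
# The loop halts only when the metal runs out, never because "enough"
# paperclips exist: nothing else appears in the objective.
```

The point of the caricature is that no amount of self-improvement touches the goal itself; making the agent smarter only makes the resource base run out faster.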

Unfortunately, Musk isn't paying enough attention. Consider his own companies. Tesla is a battery maximizer—an electric car is a battery with wheels and seats. SpaceX is an orbital payload maximizer, driving down the cost of space launches in order to encourage more sales for the service it provides. Solar City is a photovoltaic panel maximizer. And so on. All three of Musk's very own slow AIs are based on an architecture that is designed to maximize return on shareholder investment, even if by doing so they cook the planet the shareholders have to live on. (But if you're Elon Musk, that's okay: you plan to retire on Mars.)

The problem with corporations is that despite their overt goals—whether they make electric vehicles or beer or sell life insurance policies—they are all subject to instrumental convergence insofar as they all have a common implicit paperclip-maximizer goal: to generate revenue. If they don't make money, they are eaten by a bigger predator or they go bust. Making money is an instrumental goal—it's as vital to them as breathing is for us mammals, and without pursuing it they will fail to achieve their final goal, whatever it may be. Corporations generally pursue their instrumental goals—notably maximizing revenue—as a side-effect of the pursuit of their overt goal. But sometimes they try instead to manipulate the regulatory environment they operate in, to ensure that money flows towards them regardless.

Human tool-making culture has become increasingly complicated over time. New technologies always come with an implicit political agenda that seeks to extend its use, governments react by legislating to control the technologies, and sometimes we end up with industries indulging in legal duels.

For example, consider the automobile. You can't have mass automobile transport without gas stations and fuel distribution pipelines. These in turn require access to whoever owns the land the oil is extracted from—and before you know it, you end up with a permanent occupation force in Iraq and a client dictatorship in Saudi Arabia. Closer to home, automobiles imply jaywalking laws and drink-driving laws. They affect town planning regulations and encourage suburban sprawl, the construction of human infrastructure on the scale required by automobiles, not pedestrians. This in turn is bad for competing transport technologies like buses or trams (which work best in cities with a high population density).

To get these laws in place, providing an environment conducive to doing business, corporations spend money on political lobbyists—and, when they can get away with it, on bribes. Bribery need not be blatant, of course. For example, the reforms of the British railway network in the 1960s dismembered many branch services and coincided with a surge in road building and automobile sales. These reforms were orchestrated by Transport Minister Ernest Marples, who was purely a politician. However, Marples accumulated a considerable personal fortune during this time by owning shares in a motorway construction corporation. (So, no conflict of interest there!)

The automobile industry in isolation isn't a pure paperclip maximizer. But if you look at it in conjunction with the fossil fuel industries, the road-construction industry, the accident insurance industry, and so on, you begin to see the outline of a paperclip maximizing ecosystem that invades far-flung lands and grinds up and kills around one and a quarter million people per year—that's the global death toll from automobile accidents according to the World Health Organization: it rivals the first world war on an ongoing basis—as side-effects of its drive to sell you a new car.

Automobiles are not, of course, a total liability. Today's cars are regulated stringently for safety and, in theory, to reduce toxic emissions: they're fast, efficient, and comfortable. We can thank legally mandated regulations for this, of course. Go back to the 1970s and cars didn't have crumple zones. Go back to the 1950s and cars didn't come with seat belts as standard. In the 1930s, indicators—turn signals—and brakes on all four wheels were optional, and your best hope of surviving a 50km/h crash was to be thrown clear of the car and land somewhere without breaking your neck. Regulatory agencies are our current political systems' tool of choice for preventing paperclip maximizers from running amok. But unfortunately they don't always work.

One failure mode that you should be aware of is regulatory capture, where regulatory bodies are captured by the industries they control. Ajit Pai, head of the American Federal Communications Commission who just voted to eliminate net neutrality rules, has worked as Associate General Counsel for Verizon Communications Inc, the largest current descendant of the Bell telephone system monopoly. Why should someone with a transparent interest in a technology corporation end up in charge of a regulator for the industry that corporation operates within? Well, if you're going to regulate a highly complex technology, you need to recruit your regulators from among those people who understand it. And unfortunately most of those people are industry insiders. Ajit Pai is clearly very much aware of how Verizon is regulated, and wants to do something about it—just not necessarily in the public interest. When regulators end up staffed by people drawn from the industries they are supposed to control, they frequently end up working with their former officemates to make it easier to turn a profit, either by raising barriers to keep new insurgent companies out, or by dismantling safeguards that protect the public.

Another failure mode is regulatory lag, when a technology advances so rapidly that regulations are laughably obsolete by the time they're issued. Consider the EU directive requiring cookie notices on websites, to caution users that their activities were tracked and their privacy might be violated. This would have been a good idea, had it shown up in 1993 or 1996, but unfortunately it didn't show up until 2011, by which time the web was vastly more complex. Fingerprinting and tracking mechanisms that had nothing to do with cookies were already widespread by then. Tim Berners-Lee observed in 1995 that five years' worth of change was happening on the web for every twelve months of real-world time; by that yardstick, the cookie law came out nearly a century too late to do any good.

Again, look at Uber. This month the European Court of Justice ruled that Uber is a taxi service, not just a web app. This is arguably correct; the problem is, Uber has spread globally since it was founded eight years ago, subsidizing its drivers to put competing private hire firms out of business. Whether this is a net good for society is arguable; the problem is, a taxi driver can get awfully hungry if she has to wait eight years for a court ruling against a predator intent on disrupting her life.

So, to recap: firstly, we already have paperclip maximizers (and Musk's AI alarmism is curiously mirror-blind). Secondly, we have mechanisms for keeping them in check, but they don't work well against AIs that deploy the dark arts—especially corruption and bribery—and they're even worse against true AIs that evolve too fast for human-mediated mechanisms like the Law to keep up with. Finally, unlike the naive vision of a paperclip maximizer, existing AIs have multiple agendas—their overt goal, but also profit-seeking, and expansion into new areas, and to accommodate the desires of whoever is currently in the driver's seat.

How it all went wrong

It seems to me that our current political upheavals are best understood as arising from the capture of post-1917 democratic institutions by large-scale AIs. Everywhere I look I see voters protesting angrily against an entrenched establishment that seems determined to ignore the wants and needs of their human voters in favour of the machines. The Brexit upset was largely the result of a protest vote against the British political establishment; the election of Donald Trump likewise, with a side-order of racism on top. Our major political parties are led by people who are compatible with the system as it exists—a system that has been shaped over decades by corporations distorting our government and regulatory environments. We humans are living in a world shaped by the desires and needs of AIs, forced to live on their terms, and we are taught that we are valuable only insofar as we contribute to the rule of the machines.

Now, this is CCC, and we're all more interested in computers and communications technology than this historical crap. But as I said earlier, history is a secret weapon if you know how to use it. What history is good for is enabling us to spot recurring patterns in human behaviour that repeat across time scales outside our personal experience—decades or centuries apart. If we look at our historical very slow AIs, what lessons can we learn from them about modern AI—the flash flood of unprecedented deep learning and big data technologies that have overtaken us in the past decade?

We made a fundamentally flawed, terrible design decision back in 1995, that has damaged democratic political processes, crippled our ability to truly understand the world around us, and led to the angry upheavals of the present decade. That mistake was to fund the build-out of the public world wide web—as opposed to the earlier, government-funded corporate and academic internet—by monetizing eyeballs via advertising revenue.

(Note: Cory Doctorow has a contrarian thesis: The dotcom boom was also an economic bubble because the dotcoms came of age at a tipping point in financial deregulation, the point at which the Reagan-Clinton-Bush reforms that took the Depression-era brakes off financialization were really picking up steam. That meant that the tech industry's heady pace of development was the first testbed for treating corporate growth as the greatest virtue, built on the lie of the fiduciary duty to increase profit above all other considerations. I think he's entirely right about this, but it's a bit of a chicken-and-egg argument: we wouldn't have had a commercial web in the first place without a permissive, deregulated financial environment. My memory of working in the dot-com 1.0 bubble is that, outside of a couple of specific environments (the Silicon Valley area and the Boston-Cambridge corridor) venture capital was hard to find until late 1998 or thereabouts: the bubble's initial inflation was demand-driven rather than capital-driven, as the non-tech investment sector was late to the party. Caveat: I didn't win the lottery, so what do I know?)

The ad-supported web that we live with today wasn't inevitable. If you recall the web as it was in 1994, there were very few ads at all, and not much in the way of commerce. (What ads there were were mostly spam, on usenet and via email.) 1995 was the year the world wide web really came to public attention in the anglophone world and consumer-facing websites began to appear. Nobody really knew how this thing was going to be paid for (the original dot com bubble was largely about working out how to monetize the web for the first time, and a lot of people lost their shirts in the process). And the naive initial assumption was that the transaction cost of setting up a TCP/IP connection over modem was too high to be supported by per-use microbilling, so we would bill customers indirectly, by shoving advertising banners in front of their eyes and hoping they'd click through and buy something.

Unfortunately, advertising is an industry. Which is to say, it's the product of one of those old-fashioned very slow AIs I've been talking about. Advertising tries to maximize its hold on the attention of the minds behind each human eyeball: the coupling of advertising with web search was an inevitable outgrowth. (How better to attract the attention of reluctant subjects than to find out what they're really interested in seeing, and sell ads that relate to those interests?)

The problem with applying the paperclip maximizer approach to monopolizing eyeballs, however, is that eyeballs are a scarce resource. There are only 168 hours in every week in which I can gaze at banner ads. Moreover, most ads are irrelevant to my interests and it doesn't matter how often you flash an ad for dog biscuits at me, I'm never going to buy any. (I'm a cat person.) To make best revenue-generating use of our eyeballs, it is necessary for the ad industry to learn who we are and what interests us, and to target us increasingly minutely in hope of hooking us with stuff we're attracted to.

At this point in a talk I'd usually go into an impassioned rant about the hideous corruption and evil of Facebook, but I'm guessing you've heard it all before so I won't bother. The too-long-didn't-read summary is, Facebook is as much a search engine as Google or Amazon. Facebook searches are optimized for Faces, that is, for human beings. If you want to find someone you fell out of touch with thirty years ago, Facebook probably knows where they live, what their favourite colour is, what size shoes they wear, and what they said about you to your friends all those years ago that made you cut them off.

Even if you don't have a Facebook account, Facebook has a You account—a hole in their social graph with a bunch of connections pointing into it and your name tagged on your friends' photographs. They know a lot about you, and they sell access to their social graph to advertisers who then target you, even if you don't think you use Facebook. Indeed, there's barely any point in not using Facebook these days: they're the social media Borg, resistance is futile.
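That "hole in the social graph" is a concrete data structure, and it's worth seeing how little is needed to fill it in. Here's a toy sketch, with invented names and no resemblance to Facebook's actual implementation: the placeholder node never does anything itself, yet the edges created by other people's activity pin it down.

```python
# Toy social graph: "you" never created an account, but every photo tag
# and contact-list upload adds an edge pointing at a placeholder node.
from collections import defaultdict

graph = defaultdict(set)  # maps each account to the people it links to

def tag(account, person):
    """Record that an account's activity references some person."""
    graph[account].add(person)

tag("alice", "you")   # Alice tags you in a photograph
tag("bob", "you")     # Bob uploads his address book, which includes you
tag("alice", "bob")

# "you" has no outgoing edges (no account, no activity of your own), yet
# is fully locatable from the edges pointing in:
knows_you = {account for account, links in graph.items() if "you" in links}
```

Nothing the placeholder "does" matters; the profile is assembled entirely from the inbound edges, which is why declining to open an account doesn't keep you out of the graph.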

However, Facebook is trying to get eyeballs on ads, as is Twitter, as is Google. To do this, they fine-tune the content they show you to make it more attractive to your eyes—and by 'attractive' I do not mean pleasant. We humans have an evolved automatic reflex to pay attention to threats and horrors as well as pleasurable stimuli: consider the way highway traffic always slows to a crawl as it is funnelled past an accident site. The algorithms that determine what to show us when we look at Facebook or Twitter take this bias into account. You might react more strongly to a public hanging in Iran than to a couple kissing: the algorithm knows, and will show you whatever makes you pay attention.
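The core of such a feed is just a ranking function, and the bias toward horror falls straight out of the scoring. Below is a deliberately crude sketch, not any real platform's algorithm: the item fields and weights are made up, and the point is only that a score which rewards all strong reactions will surface outrage above pleasantness.

```python
# Toy engagement ranker: scores items purely on predicted attention,
# with no term distinguishing pleasant content from horrifying content.
def rank_feed(items):
    def predicted_attention(item):
        # Strong emotional reactions, positive OR negative, both count;
        # outrage is weighted higher because it reliably holds eyeballs.
        return item["relevance"] + 2.0 * item["outrage"] + item["delight"]
    return sorted(items, key=predicted_attention, reverse=True)

feed = [
    {"id": "kitten",   "relevance": 0.6, "outrage": 0.0, "delight": 0.7},
    {"id": "atrocity", "relevance": 0.4, "outrage": 0.9, "delight": 0.0},
    {"id": "recipe",   "relevance": 0.5, "outrage": 0.0, "delight": 0.3},
]
ordered = rank_feed(feed)
```

With these invented weights the atrocity (score 2.2) outranks the kitten (1.3): the ranker isn't malicious, it's simply optimizing the only number it was given.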

This brings me to another interesting point about computerized AI, as opposed to corporatized AI: AI algorithms tend to embody the prejudices and beliefs of the programmers. A couple of years ago I ran across an account of a webcam developed by mostly-pale-skinned Silicon Valley engineers that had difficulty focusing or achieving correct colour balance when pointed at dark-skinned faces. That's an example of human-programmer-induced bias. But with today's deep learning, bias can creep in via the data sets the neural networks are trained on. Microsoft's first foray into a conversational chatbot driven by machine learning, Tay, was yanked offline within days when 4chan- and Reddit-based trolls discovered they could train it towards racism and sexism for shits and giggles.
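The mechanism that sank Tay doesn't require anything exotic; it shows up in the simplest possible learner. Here's a toy model, in no way Tay's actual architecture, that just memorizes the most frequent reply to each prompt in its training corpus. Flood the corpus and you flood the model.

```python
# Toy illustration of data-set bias: a trivial "model" that learns the
# most common reply to each prompt from its training corpus. If trolls
# poison the corpus, the learned behaviour is poisoned with it.
from collections import Counter

def train(corpus):
    by_prompt = {}
    for prompt, reply in corpus:
        by_prompt.setdefault(prompt, Counter())[reply] += 1
    # For each prompt, keep whichever reply occurred most often.
    return {p: counts.most_common(1)[0][0] for p, counts in by_prompt.items()}

clean = [("greeting", "hello"), ("greeting", "hi")]
poisoned = clean + [("greeting", "abuse")] * 5  # trolls flood the data
model = train(poisoned)
```

After training on the poisoned corpus, `model["greeting"]` is `"abuse"`: the code is unchanged and bug-free, and the bias lives entirely in the data, which is exactly why it's so hard to identify and assign blame for.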

Humans may be biased, but at least we're accountable and if someone gives you racist or sexist abuse to your face you can complain (or punch them). But it's impossible to punch a corporation, and it may not even be possible to identify the source of unfair bias when you're dealing with a machine learning system.

AI-based systems that concretize existing prejudices and social outlooks make it harder for activists like us to achieve social change. Traditional advertising works by playing on the target customer's insecurity and fear as much as on their aspirations, which in turn play on the target's relationship with their surrounding cultural matrix. Fear of loss of social status and privilege is a powerful stimulus, and fear and xenophobia are useful tools for attracting eyeballs.

What happens when we get pervasive social networks with learned biases against, say, feminism or Islam or melanin? Or deep learning systems trained on data sets contaminated by racist dipshits? Deep learning systems like the ones inside Facebook that determine which stories to show you to get you to pay as much attention as possible to the adverts?

I think you already know the answer to that.

Look to the future (it's bleak!)

Now, if this is sounding a bit bleak and unpleasant, you'd be right. I write sci-fi, you read or watch or play sci-fi; we're acculturated to think of science and technology as good things, that make our lives better.

But plenty of technologies have, historically, been heavily regulated or even criminalized for good reason, and once you get past the reflexive indignation at any criticism of technology and progress, you might agree that it is reasonable to ban individuals from owning nuclear weapons or nerve gas. Less obviously: they may not be weapons, but we've banned chlorofluorocarbon refrigerants because they were building up in the high stratosphere and destroying the ozone layer that protects us from UV-B radiation. And we banned tetraethyl lead additive in gasoline, because it poisoned people and led to a crime wave.

Nerve gas and leaded gasoline were 1930s technologies, promoted by 1930s corporations. Halogenated refrigerants and nuclear weapons are totally 1940s, and intercontinental ballistic missiles date to the 1950s. I submit that the 21st century is throwing up dangerous new technologies—just as our existing strategies for regulating very slow AIs have broken down.

Let me give you four examples—of new types of AI applications—that are going to warp our societies even worse than the old slow AIs of yore have done. This isn't an exhaustive list: these are just examples. We need to work out a general strategy for getting on top of this sort of AI before it gets on top of us.

(Note that I do not have a solution to the regulatory problems I highlighted earlier, in the context of AI. This essay is polemical, intended to highlight the existence of a problem and spark a discussion, rather than a canned solution. After all, if the problem was easy to solve it wouldn't be a problem, would it?)

Firstly, Political hacking tools: social graph-directed propaganda

Topping my list of dangerous technologies that need to be regulated, this is low-hanging fruit after the electoral surprises of 2016. Cambridge Analytica pioneered the use of deep learning by scanning the Facebook and Twitter social graphs to identify voters' political affiliations. They identified individuals vulnerable to persuasion who lived in electorally sensitive districts, and canvassed them with propaganda that targeted their personal hot-button issues. The tools developed by web advertisers to sell products have now been weaponized for political purposes, and the amount of personal information about our affiliations that we expose on social media makes us vulnerable. Aside from the last US presidential election, there's mounting evidence that the British referendum on leaving the EU was subject to foreign cyberwar attack via weaponized social media, as was the most recent French presidential election.

I'm biting my tongue and trying not to take sides here: I have my own political affiliation, after all. But if social media companies don't work out how to identify and flag micro-targeted propaganda then democratic elections will be replaced by victories for whoever can buy the most trolls. And this won't simply be billionaires like the Koch brothers and Robert Mercer in the United States throwing elections to whoever will hand them the biggest tax cuts. Russian military cyberwar doctrine calls for the use of social media to confuse and disable perceived enemies, in addition to the increasingly familiar use of zero-day exploits for espionage via spear phishing and distributed denial of service attacks on infrastructure (which are practiced by western agencies as well). Sooner or later, the use of propaganda bot armies in cyberwar will go global, and at that point, our social discourse will be irreparably poisoned.

(By the way, I really hate the cyber- prefix; it usually indicates that the user has no idea what they're talking about. Unfortunately the term 'cyberwar' seems to have stuck. But I digress.)

Secondly, an adjunct to deep learning targeted propaganda is the use of neural network generated false video media.

We're used to Photoshopped images these days, but faking video and audio is still labour-intensive, right? Unfortunately, that's a nope: we're seeing first generation AI-assisted video porn, in which the faces of film stars are mapped onto those of other people in a video clip using software rather than a laborious human process. (Yes, of course porn is the first application: Rule 34 of the Internet applies.) Meanwhile, we have WaveNet, a system for generating realistic-sounding speech in the voice of a human speaker the neural network has been trained to mimic. This stuff is still geek-intensive and requires relatively expensive GPUs. But in less than a decade it'll be out in the wild, and just about anyone will be able to fake up a realistic-looking video of someone they don't like doing something horrible.

We're already seeing alarm over bizarre YouTube channels that attempt to monetize children's TV brands by scraping the video content off legitimate channels and adding their own advertising and keywords. Many of these channels are shaped by paperclip-maximizer advertising AIs that are simply trying to maximize their search ranking on YouTube. Add neural network driven tools for inserting Character A into Video B to click-maximizing bots and things are going to get very weird (and nasty). And they're only going to get weirder when these tools are deployed for political gain.

We tend to evaluate the inputs from our eyes and ears much less critically than what random strangers on the internet tell us—and we're already too vulnerable to fake news as it is. Soon they'll come for us, armed with believable video evidence. The smart money says that by 2027 you won't be able to believe anything you see in video unless there are cryptographic signatures on it, linking it back to the device that shot the raw feed—and you know how good most people are at using encryption? The dumb money is on total chaos.
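As a sketch of what "cryptographic signatures linking back to the device" could look like: a real camera would hold an asymmetric key in a secure element, so anyone could verify footage without knowing the secret; the stdlib HMAC below is just a stand-in for that, and the key and frame bytes are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical per-device secret provisioned at manufacture. A real scheme
# would use a public-key signature (e.g. Ed25519) in a secure element, so
# that verifiers never need to hold the secret.
DEVICE_KEY = b"secret-key-baked-into-camera-hardware"

def sign_frame(frame_bytes: bytes) -> bytes:
    """Hash the raw frame, then authenticate the digest with the device key."""
    digest = hashlib.sha256(frame_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()

def verify_frame(frame_bytes: bytes, tag: bytes) -> bool:
    """Recompute the tag; constant-time comparison foils timing attacks."""
    return hmac.compare_digest(sign_frame(frame_bytes), tag)

raw = b"\x00\x01... raw sensor feed ..."
tag = sign_frame(raw)
assert verify_frame(raw, tag)            # untampered footage checks out
assert not verify_frame(raw + b"x", tag) # any edit breaks the signature
```

The hard part, of course, isn't the crypto: it's key management across a billion devices, and getting viewers to care when the check fails.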

Paperclip maximizers that focus on eyeballs are so 20th century. Advertising as an industry can only exist because of a quirk of our nervous system—that we are susceptible to addiction. Be it tobacco, gambling, or heroin, we recognize addictive behaviour when we see it. Or do we? It turns out that the human brain's reward feedback loops are relatively easy to game. Large corporations such as Zynga (Farmville) exist solely because of it; free-to-use social media platforms like Facebook and Twitter are dominant precisely because they are structured to reward frequent interaction and to generate emotional responses (not necessarily positive emotions—anger and hatred are just as good when it comes to directing eyeballs towards advertisers). "Smartphone addiction" is a side-effect of advertising as a revenue model: frequent short bursts of interaction keep us coming back for more.

Thanks to deep learning, neuroscientists have mechanised the process of making apps more addictive. Dopamine Labs is one startup that provides tools to app developers to make any app more addictive, as well as to reduce the desire to continue a behaviour if it's undesirable. It goes a bit beyond automated A/B testing; A/B testing allows developers to plot a binary tree path between options, but true deep learning driven addictiveness maximizers can optimize for multiple attractors simultaneously. Now, Dopamine Labs seem, going by their public face, to have ethical qualms about the misuse of addiction maximizers in software. But neuroscience isn't a secret, and sooner or later some really unscrupulous people will try to see how far they can push it.
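To make the contrast with A/B testing concrete, here's a minimal sketch of the "keep optimizing for whatever hooks users" approach as an epsilon-greedy multi-armed bandit. The engagement rates are simulated and everything here is invented for illustration; it is emphatically not Dopamine Labs' actual method.

```python
import random

random.seed(0)  # deterministic for the demo

def engagement(variant: int) -> float:
    """Simulated user response; variant 2 is secretly the 'stickiest'.
    (A hypothetical stand-in for a live engagement metric.)"""
    rates = [0.10, 0.15, 0.40]
    return 1.0 if random.random() < rates[variant] else 0.0

def epsilon_greedy(n_variants=3, rounds=5000, eps=0.1):
    """Unlike a one-shot A/B test, the bandit continuously re-allocates
    traffic toward whichever variant is currently winning."""
    counts = [0] * n_variants
    rewards = [0.0] * n_variants
    for _ in range(rounds):
        if random.random() < eps:
            arm = random.randrange(n_variants)         # explore
        else:
            means = [rewards[i] / counts[i] if counts[i] else 0.0
                     for i in range(n_variants)]
            arm = means.index(max(means))              # exploit current best
        rewards[arm] += engagement(arm)
        counts[arm] += 1
    return counts

counts = epsilon_greedy()
# The most addictive variant ends up shown far more often than the rest.
assert counts[2] == max(counts)
```

A binary-tree A/B test compares two fixed options and stops; this loop never stops, and with many simultaneous "arms" (colour, copy, notification timing...) it converges on whatever combination your nervous system finds hardest to put down.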

Let me give you a more specific scenario.

Apple have put a lot of effort into making realtime face recognition work with the iPhone X. You can't fool an iPhone X with a photo or even a simple mask: it does depth mapping to ensure your eyes are in the right place (and can tell whether they're open or closed), and it recognizes your face from underlying bone structure through makeup and bruises. It's running continuously, checking pretty much as often as every time you'd hit the home button on a more traditional smartphone UI, and it can see where your eyeballs are pointing. The purpose of this is to make it difficult for a phone thief to get anywhere if they steal your device. But it means your phone can monitor your facial expressions and correlate them against app usage. Your phone will be aware of precisely what you like to look at on its screen. With addiction-seeking deep learning and neural-network generated images, it is in principle possible to feed you an endlessly escalating payload of arousal-maximizing inputs. It might be Facebook or Twitter messages optimized to produce outrage, or it could be porn generated by AI to appeal to kinks you aren't even consciously aware of. But either way, the app now owns your central nervous system—and you will be monetized.

Finally, I'd like to raise a really hair-raising spectre that goes well beyond the use of deep learning and targeted propaganda in cyberwar.

Back in 2011, an obscure Russian software house launched an iPhone app for pickup artists called Girls around Me. (Spoiler: Apple pulled it like a hot potato when word got out.) The app worked out where the user was using GPS, then queried FourSquare and Facebook for people matching a simple relational search—for single females (per Facebook) who had checked in (or been checked in by their friends) in your vicinity (via FourSquare). The app then displayed their locations on a map, along with links to their social media profiles.
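The "simple relational search" really is simple. Here's a hedged sketch over invented toy data—the field names, coordinates, and 1 km cutoff are all hypothetical, but the join-two-feeds-and-filter logic is the whole app:

```python
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine), in km."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical merged feed: relationship status from one network,
# last check-in location from another.
profiles = [
    {"name": "A", "gender": "f", "single": True,  "lat": 55.953, "lon": -3.188},
    {"name": "B", "gender": "f", "single": False, "lat": 55.950, "lon": -3.190},
    {"name": "C", "gender": "f", "single": True,  "lat": 51.507, "lon": -0.128},
]

def nearby_targets(profiles, my_lat, my_lon, radius_km=1.0):
    """The entire 'product': one filter joining two data sources."""
    return [p["name"] for p in profiles
            if p["gender"] == "f" and p["single"]
            and km_between(my_lat, my_lon, p["lat"], p["lon"]) <= radius_km]

print(nearby_targets(profiles, 55.953, -3.188))  # ['A'] — B isn't single, C is 500 km away
```

Ten lines of filtering, no machine learning required: the danger was never the code, it was the data being available to join.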

If they were doing it today the interface would be gamified, showing strike rates and a leaderboard and flagging targets who succumbed to harassment as easy lays. But these days the cool kids and single adults are all using dating apps with a missing vowel in the name: only a creeper would want something like "Girls around Me", right?

Unfortunately there are even nastier uses than scraping social media to find potential victims for serial rapists. Does your social media profile indicate your political or religious affiliation? Nope? Don't worry, Cambridge Analytica can work them out with 99.9% precision just by scanning the tweets and Facebook comments you liked. Add a service that can identify people's affiliation and location, and you have the beginning of a flash mob app: one that will show you people like Us and people like Them on a hyper-local map.

Imagine you're young, female, and a supermarket has figured out you're pregnant by analysing the pattern of your recent purchases, like Target back in 2012.

Now imagine that all the anti-abortion campaigners in your town have an app called "babies at risk" on their phones. Someone has paid for the analytics feed from the supermarket and the result is that every time you go near a family planning clinic a group of unfriendly anti-abortion protesters engulfs you.

Or imagine you're male and gay, and the "God Hates Fags" crowd has invented a 100% reliable Gaydar app (based on your Grindr profile) and is getting their fellow travellers to queer bash gay men only when they're alone or outnumbered 10:1. (That's the special horror of precise geolocation.) Or imagine you're in Pakistan and Christian/Muslim tensions are mounting, or you're in rural Alabama, or ... the possibilities are endless.

Someone out there is working on it: a geolocation-aware social media scraping deep learning application, that uses a gamified, competitive interface to reward its "players" for joining in acts of mob violence against whoever the app developer hates. Probably it has an innocuous-seeming but highly addictive training mode to get the users accustomed to working in teams and obeying the app's instructions—think Ingress or Pokemon Go. Then, at some pre-planned zero hour, it switches mode and starts rewarding players for violence—players who have been primed to think of their targets as vermin, by a steady drip-feed of micro-targeted dehumanizing propaganda delivered over a period of months.

And the worst bit of this picture?

Is that the app developer isn't a nation-state trying to disrupt its enemies, or an extremist political group trying to murder gays, Jews, or Muslims; it's just a paperclip maximizer doing what it does—and you are the paper.



Not to detract too much from your broader (terrifying) point, but does Cambridge Analytica actually match up to its own hype? I was under the impression that it was actually a ramshackle scam that was largely ignored by the campaigns it claimed it worked for.


does Cambridge Analytica actually match up to its own hype?

I'm not sure that matters — even if they don't, the real thing will be along soon enough.

It's fairly obvious that a lot of electoral meddling via social media took place in 2016/17, much of it automated and relying on mobilized bot armies; it's a fair bet that DARPA and/or the NSA black budget will now be funneling on the order of billions of dollars into (a) making the tech work reliably, and (b) figuring out how to defend against it. It's an early example of what Vernor Vinge named a "DWIW" weapon in "Rainbows End" ("Do What I Want" AI).


The geolocation stuff is terrifying; however, can't you just turn off the broadcasting of your location from your phone? It feels like a lot of these problems can be - if not solved - at least lessened by more control over what information you 'transmit' out.

As you point out, facebook can fill in the holes in its victim-graph using information from your neighbours. Perhaps we just need to force them to have ridiculously fine-grained permissions on data usage, so that they need permissions from each victim for each use of their (say) date of birth.

If there is one thing that gums up a functioning slow-AI, it's a stultifying byzantine bureaucracy!


Sooner or later, the use of propaganda bot armies in cyberwar will go global, and at that point, our social discourse will be irreparably poisoned.

Nothing really new here, "just" speeded-up & refined. Josef Goebbels or "Saint" Dominic would recognise the methods immediately!

2) Won't the shit really hit the fan, though, if/when Russian (or whoever) meddling is proven in either the US &/or Brexit results?

There are mutterings starting here about the latter, the Brits being just a tad more cynical than the US electorate (I hope).

Someone else, a philosophical economist, saw some of this a long time ago, but his name has been much-taken-in-vain by supposed "followers" of his, who, all too plainly, haven't understood a single thing he said: Adam Smith.


But you & we are right to be scared. How do we & you ensure that this vital message gets spread around, since the more people understand your message, the higher the chance of preventing this nasty future - suggestions?


So, who cribbed from whom? This looks an awful lot like Ted Chiang's:


I ran across Ted's essay while I was halfway through writing this talk; we think on convergent paths.

I've been writing about this since 2010, though.


Good enough :-) I do subscribe to the theory that ideas want to be born, and will find multiple channels to do so!


Went off the deep end a little bit at the end there, didn't you, Chuck? I was totally on board with you right up until pretty much the last sentence.... Maybe I missed something, but what corporation wants its users/eyeballs/ad-targets to start inflicting acts of violence on other groups?

sure the ad networks provide the tools that your nazi app developers will need, but the app developers still have to be nazis right?


Making it so that you can use the internet from your phone and at the same time remain hidden is not trivial. Even if you do turn everything off so that nothing is explicitly shared, it's still possible to geolocate your phone remotely if you use it for internet access.

This happens because the phone needs to have an IP address to access the internet. These are not random, and are in most cases handed out by somebody else and not hard-coded to the phone. From the IP address (that is the series of four numbers separated by dots) it's not that hard to get at least some location data. Every time you access a web site, the site gets your IP address.

If you're on cellular data, it's probably locatable at least to which country you are in, because the IP address your phone gets is in a block allocated to that country. It might depend on the configurations of the cellular provider what kind of IP address they show "outside" of their systems, but I think it's still more or less bound to a country.

If you're on some wifi network, the IP address you get from there can usually be located quite easily, if wanted. The IP address might be NATted (so that the real internet "sees" only a single address for the whole wifi), but then its location can be deduced from other data, especially if it's a public one - then the other mobile users could provide their GPS data to the server and that could be connected to your device if it's on the same wifi.

Of course one can try to go around this IP address system by using Tor, but it's not that easy on a phone, and would probably have an effect on the battery life. This is because it uses encryption and is somewhat resource intensive.
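For what it's worth, the country-level lookup described above is just a membership test against published allocation tables. A toy sketch using the stdlib—the blocks and country codes below are invented for illustration; real tables come from the regional registries and contain hundreds of thousands of entries:

```python
import ipaddress

# Hypothetical allocation table, mapping CIDR blocks to countries.
BLOCKS = [
    ("81.0.0.0/10",    "GB"),
    ("130.230.0.0/15", "FI"),
    ("203.0.113.0/24", "EXAMPLE"),
]

def country_for(ip: str) -> str:
    """Find which allocated block (if any) contains the address."""
    addr = ipaddress.ip_address(ip)
    for cidr, country in BLOCKS:
        if addr in ipaddress.ip_network(cidr):
            return country
    return "unknown"

print(country_for("130.231.4.2"))  # FI — the allocation block, not the phone, gives it away
```

Every server you talk to gets your address for free, so this lookup costs an eavesdropper nothing; no permission toggle on the phone can prevent it.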


"what corporation wants its users/eyeballs/ad-targets to start inflicting acts of violence on other groups" Arms manufacturers, prison owners, spyware manufacturers, and that's just the obvious candidates who directly profit from violence.

Once someone develops a perfect gaydar app (for example, and assuming that such a thing is possible), even if they only ever intended it for peaceful purposes, it will end up being used for evil (possibly someone else will clone the idea etc.), that's just what people are like.


sure the ad networks provide the tools that your nazi app developers will need, but the app developers still have to be nazis right?

Corporations as they currently exist ain't interested in massacring their customers. But we have any number of corporations who currently sell tools that are used to murder foreigners in bulk.

(Also, we seem to be heading towards virtual corporations — I can't begin to describe how bad I think this idea is — qua AIs, and who the fuck knows what their motives will be? Or what the scale factor for a minimum viable product will be in this problem domain? It's one thing if it takes 10,000 workers to release a genocide app, and another thing entirely if three hackers can barf one up in a weekend-of-code session. It's like the common denominator of a Eurofighter Typhoon II and a quadrotor drone carrying half a kilo of semtex knocked up in a back room by ISIS engineers; one of them is vastly more powerful and sophisticated and expensive, but if you're an unprotected human body, in range, and the target, either one can kill you just as dead.)


IIRC, Egypt, the UAE and Saudi police have already been using Grindr to sting gay men in their jurisdictions — cops fake up a profile and arrest anyone who turns up for a date.

Note that in at least one of those states being gay is a capital offense. (Not sure about Egypt and UAE, but it's still a serious felony there.)


I'd like to direct your attention to a very scary detail in the US:

Right now there are strong forces hell-bent on getting corporations the constitutional right to refuse to provide certain kinds of health-care to their employees, because it would violate the corporation's (owners') religious beliefs.

I put "owners" in paranthesis, because it is painfully obvious that the very moment they crack that bit open, they will argue that you cant distinguish between a company owned by one religious family, and one owned by 100 religious shareholders.

If corporations win this right, then we're headed straight into "First they came for the gays, [...] colored, [...] irish, ..." territory.

The case currently awaiting smoke from SCOTUS, about a bakery and gay marriages, could settle this either way, either going "sure, a bakery can discriminate if they care to wave their cross" or "if you engage in commerce, you cannot discriminate, no matter what you believe" or somewhere in the middle.

Once somebody gains some human rights, they usually get the rest too, and corporations in the USA already have gained the right to influence politics, and religion seems next.

At the end of that slippery slope lies armed corporations who can kill in self-defence.


I thought governments came before other corporations.

"What is the heart but a spring?"---Thomas Hobbes in "Leviathan"

The problem with comparing corporations to AIs is that AIs are beings but not human whereas corporations are human (at least on this planet) but not beings.


I am wondering how the slow AI Corporations are subject to evolution:

They seem to have a fairly short lifetime, and can certainly evolve their processes and methods and copy successful methods from other corporations, and in general their ability to innovate and hence survive seems to be inversely correlated with their age. Presumably the increasingly faster rate of change in markets leads to a faster corporate mutation rate and a shorter individual corporate lifespan and increasing rates of growth for the new disruptors.

Does this lead to a small number of short-lived corporate giants or an explosion of small short-lived corporates?


sure the ad networks provide the tools that your nazi app developers will need, but the app developers still have to be nazis right?

Good thing there's no signs of anti-PC intolerance and that tech isn't unrepresentative of the general population, then(!) ;o)


There's a whole terrifying pile of ultrasonic and audio comms software being deployed lately for everything from autoconfiguration (Chromecast) to fingerprinting to media identification. As basically no platform has mute-by-default for all apps and web content it won't matter if you have location permissions blocked on your phone. The nazi flashmob will be able to hear your web ads singing if they get close.


I am wondering how the slow AI Corporations are subject to evolution

Like all questions in Biology, the answer is "it depends..."

Corporations have some characteristics of biological organisms, but not all of them:

  • They divide, but not endlessly (unlike bacteria)
  • They swap information (like bacteria) but can also patent it
  • They grow but can also shrink
  • They die, but can also be 'resurrected' (bankruptcy?)
  • They have analogous parts to cells (employees) but these are both totipotent (in theory) and can swap between corporates

When it comes to ecosystems of corporations, the business sector plays a role, just as the environment does for a biological ecosystem (reef, forest, undersea volcano...). Obviously, things change much faster in I.T. than in Japanese hospitality, and some sectors allow for much larger companies than others.

You can make all sorts of fanciful analogies - the dotcom bubble was like the Cambrian explosion!! - but I think the only way to get any useful answers would be some kind of simulation of corporate ecosystems. Could make an interesting game.


Religious leaders - always - never forget them ....


Appears UK & US law are heading in different directions here - I'm very glad to say.


OK I'm going to cheat & quote a C_Stross tweet ( And hope he doesn't mind )

The Lipson-Shiu Corporate Type Test:

(My personality type is SCIE. How about you?) I was horrified to come out as ICIE - I think I'll take it again & see what I get... [ pause ] Almost as bad: ILUE


What you're missing is that these corporations have gobs of data for sale. If a big church decides it is their holy mission to provide the names of Gay people to their "conversion squads" that church can just buy a bunch of data, through cut-outs if necessary, and search for:

1.) Men who live in San Francisco/West Hollywood
2.) Have never been married
3.) Are over 35
4.) Post on Grindr

It's not the church's fault if some of the people on the "conversion squads" take their work a little too seriously, or sometimes share data with groups which are a little more hardcore. "We do post guidelines when we put the data online, but some people just don't follow the rules."

(At this point I'm also a little surprised that some big church hasn't decided that surveilling their followers is a wonderful way to ensure a flow of big donations.)


"Maybe I missed something, but what corporation wants its users/eyeballs/ad-targets to start inflicting acts of violence on other groups?"

There is a huge market for inciting violence -- the Daily Mail, Fox TV, Alex Jones, and so on all, to a greater or lesser extent, make their money from it. Every newspaper that has ever called for a war in its headlines (which is most of them) is a "corporation [which] wants its users/eyeballs/ad-targets to start inflicting acts of violence on other groups".


I thought governments came before other corporations.

Governments — those that predate corporations (and, loosely, the Thirty Years' War and the Treaty of Westphalia that established the requirements of the modern nation-state) — don't structurally resemble corporations: they were almost invariably some species of despotic monarchy. (Which might be seen as a family-owned business, but that's stretching the metaphor to breaking point.)


Not all governments were despotic monarchies, even in the ancient world. The Roman Republic is a clear counterexample, as the name suggests (res publica, the thing that belongs to the people), and a strikingly successful one for a long time. Athens from Solon on was a less stable and successful example, but still one that had a substantial impact for a while. Of course they also both made a point of contrasting themselves with "Oriental despotism," exaggerating the differences for propaganda, but there were real differences.

But I think there's an older example still of AI, and one that in fact you point to in your own address: Gods and religions. The god is the human-recognizable symbol for an entity that outlives individual human beings, that infects human brains and uses them to propagate itself (remember that the origin of the word "propaganda" was the Latin phrase meaning "propagation of the faith"), and that has powers that individual human beings lack. I think it makes sense to view it as an ideational parasite. And such entities have characteristic features that you yourself point to in your discussion of transhumanism. Even ideational systems that start out by immunizing people against gods are at risk for evolving into religions over time; look at what happened with Buddhism, which started out virtually as an anti-religion.


"Nobody in 2007 was expecting a Nazi revival in 2017, right?"

You mean you didn't? Seriously. The signs were all there by 1997, let alone 2007 :-( The two reasons that most people don't expect easily predictable effects are that (a) they look at the trends and not the underlying facts and (b) they choose to believe the outcome that most matches their worldview.

And w.r.t. your last paragraph, I have been saying for decades "You aren't the customer - you are the commodity." The customers are the organisations they sell their services (and your data) to.

Sorry, but I am afraid you are being too optimistic!


Great essay!

Have wondered whether some 'AI progress' might be slowed down by requiring that such AI be forced to juggle multiple demands/goals. Plus, since multi-tasking is supposed to be such a good thing for human employees to do, may as well get the AIs in on the fun.

What happens when AIs compete against AIs? We should be seeing this already. Such competition/evolution could be hastened by advertisers tightening ad budgets.

Gov't run AI - apart from non-democratic countries, no mention. Curious. Maybe the military will work on this because ever since the science budgets got slashed, they're the only ones with discretionary research spending. Motivation/rationale: the way things are looking climate-wise, the military could use help in optimizing resources for disaster-relief operations. Seriously - there's lots of real-world potential here for beneficial AI including getting rid of a few layers of elected 'Gov't' reps.

Reaction - After CorpA gets 100% of all eyeballs, what do you do if you're CorpB? I'm guessing CorpB won't just fold - so what's the likeliest retaliatory strategy? All it takes is one Hindenburg for an entire new industry to fold.


Just a note:

Scientific names consist of a genus, which is capitalized, followed by a specific epithet, which is lower case, and an authority, so that you know whose species concept is referred to. The genus and specific epithet are typically written in italics or underlines (the latter is mostly used on hand-written specimen labels, because most people don't have good italic handwriting).

So "Homo sapiens sapiens" is properly Homo sapiens ssp. sapiens L. or informally, Homo sapiens sapiens or even Homo s. sapiens. The last is acceptable because the we're the nominotypical subspecies, unlike, say, Neanderthals. The L. is in honor of Linnaeus, who first proposed the species concept we now use for ourselves. Early biologists have their names abbreviated, and no accepted scientific name is older than Linnaeus' publications.

Why care? Well, look at the role of evolution in the arguments of the religious right. They ignore the fact that the first page of the Bible says that the Moon only comes up during the night (something that, if they glanced skyward during about half the month, they'd know was false), accept astronomy, but denigrate evolution because the Bible says that Man was created separately from animals, and they can't stand the theory that all eukaryotic species descend from a single ancestor.

A lot of people go along with this scorn of biology without thinking about it, despite the fact that the theory of evolution (now in version 3, Evo Devo) is far more successful than, say, general relativity and quantum mechanics (where we're still arguing whether time exists and whether 96% of the observable universe exists, and if so, what it is). Prejudice is not inversely linked to success, sadly.

Basically, if you want to show support for good science, italicize scientific names and only capitalize the genus. I know that's a bit more typing, but it's an easy way to show you understand and you care.


Tay wasn't Microsoft's first conversational chatbot; Xiaoice was - she's huge on Weibo and WeChat, she got a job presenting the weather on Chinese TV in 2015 and had a book of poetry come out this year. The difference isn't that Xiaoice was any better at filtering (she got those protections about 24 hours after Tay got corrupted); it's just that online culture in China didn't include trolling her the way western online culture optimised for trolling Tay. So I'm wondering what the implications of that are for 'slow AI' in China, where you have that interesting mercantilist government participation in business alongside emerging startup culture with brands like Baidu?


Oh, come off it! The fact that the theory of evolution is so solid does NOT mean that the exact definition of species is, let alone the detailed taxonomy. Mammals are relatively simple, but the Linnaean model does NOT match reality even for them/us (because evolution and inter-fertility are continuous and probabilistic, not discrete and deterministic), so there is inevitably a lot of personal judgement and disagreement. Aside: that's also true of the theory of general relativity versus the exact formula (despite the claims of the black hole divers).

And demanding a particular font in a blog such as this is just plain ridiculous. There are a zillion such conventions in mathematics and other sciences, and it is insane to expect them to be used.

Yes, one can reasonably say that we, Neanderthals and Denisovans are all the same species and Homo sapiens is c. 1,000,000 years old - but it is equally reasonable to say that we are different ones, and HS is only c. 300,000 years old. And variations on those ....


The only quibble I have is that we're talking about AI 3.0 at least, possibly 4.0.

Basically, humans have been joining into groups and using a variety of artificial techniques to augment memories for tens of thousands of years (this is the whole Lynne Kelly thing I went off on a few months ago). This wasn't just ancient superstitious polytheism. For one thing, it's not clear if the old "spirits" were at all the same thing as the Judeo-Christian God. In any case, a tribe with many spirits also had a normal mechanism against subversion: nobody normally had a monopoly on all the knowledge the group collectively needed to survive, so by controlling access to knowledge, there was a check on someone taking authoritarian power.

This was (is) AI 1.0. Only remnants of the ancient, pre-literate originals survive, on reservations in Australia, the Americas, and parts of Russia.

AI 1.0 systems survived the transition to literacy, and they reached their heights with the Roman and other classical empires. Much of their architecture was designed to transmit, preserve, and enforce the memes that bound them together. You can see remnants of this system in North Korea, for example.

Normally, when two AIs of 1.0 fought, the stronger subsumed the weaker, either by destroying its "gods" (which really means attacking all the technology designed to pass on information between generations), or subsuming those gods into a bigger system (as in Rome and China).

AI 2.0 might have started with the Jews refusing to be assimilated, changing their concept of god to God and religion, and continuing on. This fed into the whole "Religions of the book" thing. Are these AI 2.0? Hard to tell, because there's sort of an evolution rather than a revolution.

AI 2.0 definitely showed up when the printing press started making it much easier to store and transmit huge amounts of information. This was about the time that corporations showed up, incidentally.

What we're seeing now is that humans are no longer the only information processing system, and so existing AIs (corporations, nation-states, etc.) are increasingly becoming symbiotic structures composed of multiple systems for processing information and integrating human behavior in groups. We'll see how well it all works out.


Great talk. I just have a few quibbles

  • In the US, the average age of cars was 11.6 years in 2015. I remember reading that it jumped to 13 years in 2017, but I can't find it. Perhaps I'm remembering something incorrectly?

  • Your app scenario is complicated by the fact that app stores are Android and Apple walled gardens. That's the reason that the app idea you mentioned about finding women near you has not yet been resurrected.

  • In general, I think apps have followed music, writing, and news sites/blogs into the following structure:

    a. About 1 percent will make enough money for the developer to live on

    b. An extra 4.5 percent will make some pocket money

    c. A portion of the free apps downloaded have a different utility which doesn't require them to make money per se. I mean, your bank doesn't make money off of you using its app, does it?

    d. Most apps make money off of ads. Ask news sites how well that is going.

    e. A huge fraction of apps are template apps, the equivalent of poorly-made YouTube videos or fanfiction.

  • The Nazi apps you mentioned would really be built as template apps (I don't think outright Nazis have yet demonstrated enough technical talent to build an app from scratch). That leads to an arms race between template apps and any AI which removes them. This is similar to the fight social media is experiencing now, except that right now the tech giants have the upper hand in policing their app stores.
  • At 33:00 in the talk:

    It's possible to geolocate your phone if you have cellphone access. One method works by triangulation between accessible cell tower locations. Internet access is not required.

    I'm not certain the same approach works with WiFi access, but I don't see why it wouldn't, though it might well need a separate implementation.
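Since the mechanics came up: the triangulation idea can be sketched in a few lines. This is a toy 2D trilateration under idealised assumptions (exact distance estimates, flat geometry, made-up coordinates); real cell positioning works from noisy timing and signal-strength measurements, but the geometry is the same.

```python
# Toy 2D trilateration: recover a position from three known tower
# locations and idealised distance estimates to each.
def trilaterate(p1, r1, p2, r2, p3, r3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise gives two linear
    # equations in (x, y), solved here by Cramer's rule.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Phone actually at (2, 1); towers at three known sites.
print(trilaterate((0, 0), 5**0.5, (4, 0), 5**0.5, (0, 4), 13**0.5))
# prints approximately (2.0, 1.0)
```

With real measurements the three circles rarely intersect in a single point, so production systems do a least-squares fit over many towers instead.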


    Additionally, the requirement of "perfection" isn't reasonable. Things like that work more on "believable"...and then there's the question of "Believable by whom?". Lots of people will accept quite weak evidence if it coincides with their current beliefs. Otherwise most news, fake and otherwise, would be out of business.


    In the bioscience journals that I've copy edited, the style is that you spell out the genus the first time, but abbreviate it thereafter. But I've never seen the epithet abbreviated, in any article I've edited or any reference I've looked up. I'm not saying it never happens, but it doesn't seem to be common.


    With respect to "at the end of that road" may I direct your attention to the song "Joe Hill" and some of the arguments about the origin of the term "copper".

    That isn't the only, or even the most egregious, example of corporations killing people without repercussions. And they aren't all in the US. And they aren't all in the "distant" past.

    Now if you're arguing that they don't currently have the right to kill in defense of themselves, I would ask you why armed security guards exist.

    The geolocation stuff is terrifying. However, can't you just turn off the broadcasting of your location from your phone? It feels like a lot of these problems can be - if not solved - at least lessened by more control over what information you 'transmit' out.

    That doesn't work. Modern smartphone software already does it: your application will request access to your phonebook and GPS signal... and will refuse to work unless you let it. And the majority of people who don't already reflexively click "yes, I'm OK with it" on every popup will crack and allow everything if the application's payoff seems juicy.

    I mean, imagine that your Facebook app REQUIRES geolocation, or it does not work. But you can't say you won't use Facebook: 90% of your child's activities become completely opaque if you don't monitor Facebook. And woe betide you if you order them not to use Facebook; after 3 days of peer pressure at school, they will have a fully private and hidden profile. So, you WILL allow Facebook to track you in real time...


    The average age of cars in the U.S. will likely increase as the economy bleeds out. The apps could show up on a forked OS and app store, call it a "Chosen phone". True believers will be strongly encouraged to not be seen with ungodly phones.


    Yeah, the "Roman Republic" was an oligarchy. The "res publica" wasn't everyone, it wasn't even every non-slave. Rome was owned by a few families, and everyone else lived at their sufferance...not legally, but in practice.

    A better example would be Athens or Sparta, neither of which could be considered a despotism. Sparta was ruled by laws that were essentially unchanging and that everyone was required to know perfectly. They were sung at feasts. To avoid despotism, they had two kings. Athens, after a lot of hoo-raw, was ruled by the wealthy male citizens...but it wasn't an oligarchy, as the newly rich were allowed to join. (They were, of course, looked down on, but they could even rise to the top. Check out Themistocles. Of course, their enemies were likely to pull them down.)

    But the thing is, neither of those was expansionist in the manner of a corporation. Even Alexander, while he tried to conquer the world, didn't try to rule it in a unified way. Rome came closer to that model, but didn't want to dilute the control of the ruling families, so they went to a foederati system, the (loose) basis of the US Federal government.


    It doesn't appear to be reasonable to claim that Neanderthals, etc. and Homo sapiens were the same species, even though they weren't totally reproductively isolated. The evidence seems to show that interbreeding was usually not successful. In particular, the mitochondrial evidence seems to show that the mothers of current humans were Homo sapiens sapiens. One possible reason for this is that the shape of the baby's head may have been fatal to the mothers of other crossbreeds. Of course, given the rarity of the crossbreeding being successful, it could just be that the all-female-ancestors line died out. IOW, the data could be pure chance.

    But every genetic study seems to show that successful crossbreeding was rare. And the groups were not geographically isolated, so it's reasonable to deny that they were the same species.


    That sort of thing was a large part of my point. Now compare genus Felis, where F. catus and F. sylvestris interbreed readily, or genus Cervus, where C. elaphus and C. nippon do. The choice of what constitutes a species is very political. Look on the bright side - it's worse in botany :-(


    Administrative note: The discussion of evolutionary biology is derailing and should be dropped until after comment #200.

    Further comments on this topic will be deleted.


    Yeah, you are completely right: the cellular network knows the location of each mobile phone. I didn't take that into account, because firstly it's kind of hard to get if you're not the operator, and secondly at least in many places that information is strictly controlled. It's not available to a random website which the phone is accessing.

    Also yes, the applications often want more comprehensive access to the phone than is strictly necessary for the application to work. There are many reasons for this, and it's not always because the application maker is somehow shady and wants to do something you wouldn't want. Sometimes the interface for handling the program's permissions is annoying enough that the programmer just requests everything and forgets about it.


    If you could fork an app store that easily, why hasn't it happened already?


    To our OGH: Sorry for the derail, I've been reading too many official documents where scientific names are misspelled and snotty bureaucrats insist their brand of arrogant ignorance is correct, because the real science doesn't matter. I'll try to bring this back to our future being broken, but I think the reaction to asking people to spell scientific names right is good evidence that it is.

    People of European, Asian, Native American, Oceanic, or Australian ancestry have something like 4% Neanderthal and/or Denisovan DNA. That's good evidence that there was successful interbreeding, especially since it's not the same 4%. IIRC, there's evidence of a higher percentage of putative Denisovan DNA in ethnic Tibetans and Melanesians. The former is hypothesized to be part of Himalayan adaptations to living at high elevation, while the latter is just one of those things that might or might not make sense. I also believe that even African populations have evidence of multiple genomes contributing to the African line of H. s. sapiens, and I'm definitely aware of arguments that the definitions of fossil hominids might be overly split, and that the combination of genetic and morphological evidence suggests that we were even more morphologically diverse in the past than we are now, but that this did not prevent interbreeding. So no, Neanderthals were not another species, and we can argue endlessly about whether they were a subspecies. Subspecies are not reproductively isolated by definition*, and that certainly describes what genome-based evidence shows.

    I'm not clear on the extent to which Neanderthal and Denisovan DNA is functional or not in modern humans. This is a question about whether or not it codes for traits that are advantageous in specific environments, such as cold, high elevation, or increases diversity in the immune system in ways that favor defeat of pathogens.

    *Many species are not reproductively isolated. There are well over a dozen accepted definitions of species. For example, most definitions of prokaryotic species do not depend on reproductive isolation, because prokaryotes generally don't work that way.


    AFAIK triangulation via cell towers is only available to operators, not apps. The ID of the cell tower you are using is available via the SS7 system if you have the phone number or IMSI.

    Google uses a database of WLAN network IDs in addition to or instead of GPS - it's apparently more accurate. Most (v4) IP addresses allow identifying the country or even the city from which you access the internet.
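The WLAN-database approach can be illustrated with a toy sketch: look up the access points a phone can see in a survey database of known coordinates and average the hits. All BSSIDs and coordinates below are invented; real services hold billions of entries and weight by signal strength.

```python
# Hypothetical survey database: BSSID -> (lat, lon).
KNOWN_APS = {
    "aa:bb:cc:00:00:01": (51.3397, 12.3731),
    "aa:bb:cc:00:00:02": (51.3399, 12.3735),
    "aa:bb:cc:00:00:03": (51.3395, 12.3729),
}

def estimate_position(visible_bssids):
    """Average the known positions of the visible access points."""
    hits = [KNOWN_APS[b] for b in visible_bssids if b in KNOWN_APS]
    if not hits:
        return None  # no surveyed AP in sight
    lat = sum(p[0] for p in hits) / len(hits)
    lon = sum(p[1] for p in hits) / len(hits)
    return lat, lon

# Unknown BSSIDs are simply ignored.
print(estimate_position(["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02",
                         "ff:ff:ff:00:00:99"]))
```

The point is that the phone never needs to transmit anything to be located this way: merely listening for beacon frames and phoning home with the list is enough.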


    You can simply not have a phone.

    (And you sure as shit don't give your kids a phone, at a minimum for the same reason you don't give them your cheque book (quite apart from any other reasons), only it applies much more strongly in the phone case: misuse of cheques is at least a deliberate and explicit act, whereas with a phone they can spend your money on shit that doesn't even exist as part of some dumb game and not even realise that real money is involved at all, until the bill arrives and you yell at them (by which time the actual transgression is too far in the past for the conditioning to take properly).)

    Cue the inevitable "But you can't do that because..." - Stop right there. I don't care what comes after "because", I deny it. Not so long ago the things didn't even exist to be had - not so long ago everybody did "do that" because "doing that" was the only possibility. That other possibilities now do exist does not impose a similar lack of choice on deciding to reject them. Moreover, pretty well every point used to attempt to argue otherwise boils down to "but all these other people do it", which a lifetime of experience tells me is more often a reason not to do something than otherwise, and the more people obsess over it and the faster they rush into it the more reason there is not to do it. You just have to learn to say "no".

    ("Ignoring peer pressure" really needs to become a matter of basic education right from the very start of school.)

    This is what gets me about pieces like this: they always get to a certain point and then skate off sideways, failing to address the fundamental aspect - that the undesirable consequences are only even possible in the first place because people have suddenly developed this mass obsession, in large part because they are completely ignorant of the very problems in question. The obvious response is to educate them against the habit, and the material presented is excellent for the purpose, but such an aim is not only never attempted, it is never even hinted at as a potentially useful response.

    Heroin is more addictive than mobile phones, more useful, and less damaging both to individuals and to society (practically all the "problems" associated with heroin are created ex nihilo, or else massively exacerbated, by its illegality, and are not an inherent property of heroin itself). Enforcement does not reduce heroin use, but education does, even when the premises it's based on are so flaky.

    The mobile phone epidemic needs to be addressed with fervour comparable to the efforts directed against heroin, but with an approach modified to suit the different characteristics: education can be backed by a large corpus of established fact (as opposed to irrational moralistic prejudice against people choosing specific methods of making themselves happy), and enforcement is sufficiently straightforward as to be trivial (by simply not granting transmission licences, hiding a national infrastructure of radio transmitters being ridiculously impossible), requiring minimal resources (they can go into education instead) and avoiding the mass infliction of personal misery inherent in anti-heroin enforcement.

    Only this won't happen, for a reason even more fundamental to the same problem: capitalism.


    Charlie: the brain-in-a-box link is borked.

    I would disagree with the assumption of the primacy of the "overt purpose" of corporations over the purpose of making money. I would put it the other way round: making money is not merely a "life-support" function as vital as breathing, it is the prime purpose. The "overt purpose" is basically an excuse to implement a money-making organisation. The overt purpose is not the paperclip; the money is the paperclip. Making money is why corporations get set up in the first place. It's also why the things they produce are crap and longevity of any product is so hard an attribute to find: the overt purpose is compromised in the interests of making more money by selling more things, and strategies like concentrating a product's crapness into a tendency to only last six months, or an absence of functionality whose importance may not be immediately apparent but which can be "remedied" by buying another related product, are effective ways of getting away with it.


    The phone itself can triangulate off towers; the necessary information exists in the front end because of the phone's own involvement with switching towers, and it can be disambiguated using things like GPS hints and the same kind of technique as Google's WLAN trick (using an externally-supplied database).

    How straightforward it is to write code to perform this function on current implementations of mobile technology I don't know, but the nature of the technology itself does ensure that it is possible.


    I suspect switching towers is handled by the phone's baseband processor, not the application processor. That information should be at least as well protected from apps as the GPS location data. If we talk about hacking baseband processors, we open a whole new can of worms...


    "I don't think that outright Nazis have yet to demonstrate enough technical talent to build an app from scratch"

    It is certainly comforting to believe that your scary enemy is too thick to be as scary as they might like to be, but it is also misleading, since moral and technical intelligence are not rigidly correlated. No matter how vile the ideology it will still include its own complement of disturbingly talented people.


    The lead-damaged generation in the US is 45-65 years old right now.

    This is also the age range of maximum political power, similarly the upper management of most organizations is in that range.

    The symptoms are thought to be: aggression, impulsiveness, lower IQ.

    I'm hoping things get a bit better over the next decade.


    Couple of typos, if OGH is interested:
    - World Health Organization should be capitalised.
    - "usless".
    - either "because" or "when", not both. I think.


    Many thanks for the essay. I'm impressed, and frightened, and convinced.


    Jaron Lanier discussed some of the problems created by AI when he was on Tavis Smiley. He pointed out that the Nazis in Charlottesville were an offshoot of Gamergate and Black Lives Matter, and that a year from now there will be a backlash to "Hashtag Me Too."

    • Basically, the AI used to monetize the sites facilitates the creation of new negative groups in backlash to the good movements.

    His discussion starts in part two. There are transcripts of the shows at each link. Harvest them before they disappear. I've pulled some quotes, just in case.

    Computer Scientist and Author Jaron Lanier, Part 2

    [quote] Lanier: Yeah. So I was describing this process whereby people do something very positive, very pure-hearted in the current online world. Black Lives Matter is an example we were using, but I could also talk about the Arab Spring, I could talk about a lot of other examples. And I have a feeling this is gonna happen with me too that’s going on right now. So…

    Tavis: The hashtag Me Too about the women, yeah.

    Lanier: Yeah.

    Tavis: Sure, sure, okay.

    Lanier: So what happens is all these people get together. What they do is beautiful. They create literature, they create beautiful communities. It’s moving. It makes you cry. It’s incredible. It opens eyes. It opens hearts, right?

    But the thing is, behind the scenes, there’s this completely other thing going on, which is this data that’s coming in from all these people is the fuel for the engine that runs what’s called the advertising business, but I prefer to call it the behavior modification business.

    So it has to be turned into something that will generate engagement, not just for those people, but for everybody. Because you want to maximize the use of your fuel. You want it to be as powerful and as efficient as possible.

    And, unfortunately, if you want to maximize engagement, the negative emotions are more efficient uses of that fuel than the positive ones. So fear, anger, annoyance, all of these things, irritation, these things are much easier to generate engagement with.

    So all that good energy from Black Lives Matter or other movements is repackaged and rerouted not by an evil genius, but just kind of automatically by algorithms to maximally coalesce a counter group that will find each other that might not have found each other otherwise that will be irritated and agitated by it.

    And because the negative emotions are more powerful for this kind of scheme, that counterreaction will typically be more powerful than the initial good movement.

    And that’s why you have this extraordinary phenomenon of Black Lives Matter and then, the next year, you have this rise of white supremacists and neo-Nazis and this horrible thing which we really hadn’t expected. Nobody had seen that. It’s like this algorithmic process that I think is kind of reliable and we must shut that down.

    . . .

    Lanier: No, no. Look, this gets back to something I said in our previous encounter, which is there was this beautiful project from the left to make everything free, but at the same time, to want commerce because we love our commerce here. It’s like our Steve Jobs, right? So we said to make it all free, free email, free everything, but you still have to make money.

    So the only option is advertising, but in this very high-tech situation where we have this constant measurement and feedback loop with this device that we have with us all the time. It’s no longer advertising. It turns into behavior modification. So essentially, I think this was not an evil scheme.

    Probably the people in Silicon Valley would have been perfectly happy to come up with something like Facebook that was a subscription model where you could also earn a royalty for being successful as a poster on Facebook or something. And I think that alternate universe would have had its own problems, but it wouldn’t have had this problem.

    This idea that the only business model available is behavior modification for pay by mysterious third parties, so you don’t even know who’s hypnotizing you, that didn’t have to happen, and that is the problem. And that was actually a mistake made by the left and was kind of imposed on the businesses. I was there. I think that that’s actually an accurate description. [/quote]

    Virtual Reality Pioneer Jaron Lanier, Part 1

    The irony is that Tavis was pulled off the air weeks later for alleged sexual misconduct.


    "You can't do this because If you do, Child Protective Services will take your children away and remand them to the foster care system."

    Sound far-fetched? All it takes is a perception that a young person not having a phone is unreachable (or untrackable) and therefore at risk of a terrible fate. Remember the fuss about "free-range kids" a few years back, and the hate levied upon the woman who wrote about allowing her 11-year-old to ride the subway unsupervised?


    Heroin is more addictive than mobile phones, more useful, and less damaging both to individuals and to society (practically all the "problems" associated with heroin are created ex nihilo, or else massively exacerbated, by its illegality, and are not an inherent property of heroin itself). Enforcement does not reduce heroin use, but education does, even when the premises it's based on are so flaky.

    Um, we can look at numbers for that, sort of. According to one unverified source (sorry, too busy to do a proper search), in 2012 and 2013 there were a bit more than 3000 deaths/year from distracted driving, of which some (presumably large) proportion were due to cell phone use. In 2012 and 2013 there were over 40,000 drug overdose deaths in the US, and that's now jumped to over 60,000 (source). Just to make everyone miserable and start another pointless derail (as in, don't bother yet), US gun fatalities from all sources were over 30,000 (source).

    While I don't think any of these figures is necessarily definitive, the suggestion is that drugs and guns are roughly ten times more lethal in the US than are cell phones. Presumably that is different in other countries, but as noted above, I'm racing a deadline, so if other people want to continue the derail, they'll have to provide data from elsewhere.

    Anyway, it's good to see people seriously talking about addiction as a basic enterprise in capitalism. Drugs (including alcohol and arguably sugar and caffeine) have been part of capitalism since its founding, along with unfree labor and weapons. That's one part that gets left out of the story above, I think.


    I used to ride around the London Underground late at night on my own when I was 11, taking deliberately circuitous routes because they were more fun; my response to that kind of nesh paranoia is short and not particularly sweet.

    We made a fundamentally flawed, terrible design decision back in 1995, that has damaged democratic political processes, crippled our ability to truly understand the world around us, and led to the angry upheavals of the present decade. That mistake was to fund the build-out of the public world wide web—as opposed to the earlier, government-funded corporate and academic internet—by monetizing eyeballs via advertising revenue.

    Yeah. More and more I feel like "we'll support everything with ads!" is the web's Original Sin. I was delighted when Patreon made it economically feasible to publish my weird-ass comics online with no ads, and horrified when that lifeline was endangered by their fumbling attempts to make it work for people who were using it in ways it wasn't designed to function.

    I've been thinking about this a lot, lately, now that I've left Twitter for Mastodon. It feels so strange to be on a social media site that's got no advertising, that's run as a hobby because people want to have a place for a community to gather and chat. And yet it feels super familiar; it's a flashback to the days of dial-up BBSs being run out of someone's pocket as a hobby, maybe with some percentage of the users kicking in enough money to cover costs. The weirdest part right now is that I can be caught up in an hour or so, and then I have all this time that I could go do something else with, and a habit of staying there looking for new bits of microcontent to keep me from ever having to remember what it was like to have an attention span.

    (well maybe the weirdest part is that I run a tiny corner of that social network omfg, that's pretty frightening sometimes, also pretty amazing)

    I also keep on thinking about how history is cyclical, too, and how (as you note) we have had pretty much everyone who survived the Great Depression die. That's a shoe I'm really afraid of finally dropping; enough of my net worth is in index funds for that to be pretty scary. (Better go put some of it in cryptocurrency! :) )

    And, if you haven't heard of them yet, I also want to direct your attention to the idea of the "public benefit corporation" as one way people are trying to embed human need into the loop of these paperclip maximizers; I became aware of the concept when Kickstarter turned themselves into one. It's a way to legally bind the company to consider more than just blindly increasing shareholder value at the cost of anything and everything else - or, to use SF analogies, it's a weak version of Asimov's First Law of Robotics.


    I think we have a fair idea of what the microbilling alt-history would look like in the form of freemium games. It's still a paperclip maximizer, maximizing eyeballs and addictive stimuli.


    I don't think DPY is a very useful or relevant metric; as your distracted-driving citation implies, phones (excepting bizarre and unrepresentative freak cases) are not capable of directly causing death, whereas drugs and guns are (as are cars (and also mains electricity and swimming and stairs and lots of other everyday irrelevant things)), so the only usable figures are from indirect effects which are tenuous, don't mean much, and in any case concentrate too hard on a single very limited aspect to capture any general effect.

    Conversely, while deaths from overdose are distinct, usually unambiguous, and easy to count, they too fail as a metric on the same grounds of excessively narrow focus, and also of the obfuscatory advantage deliberately taken of that aspect to promote the agendas of the agencies that publish them. Many drug overdose deaths are attributable not to the properties of the drug, but to the consequences of enforcement; the user takes what they believe would be a reasonable dose of a drug with the properties they are expecting, but because of the necessity of getting it from an unregulated black market supply chain it turns out not to be the same concentration they thought it was, or even not the same drug. The bald DPY figure does not express this aspect; nor does it even include deaths from disease transmitted by sharing needles because their supply is artificially restricted, let alone express the misery of having the disease before it kills you.

    It also bypasses the point that a number of the people it counts would not have even considered taking opiates of their own accord for fun, but became addicted through putting their trust in a capitalist health care system which corporations use to make money by selling drugs - and having been accustomed to well-defined doses of pure product were even more vulnerable to dubious street purity when the official supply failed them.

    That point is significant because corporate drug pushers, like phone addiction, are a social health problem to which rather few actual deaths can be unambiguously attributed in relation to its overall pervasiveness. Particularly in the area of mental health, where the indefinite and subjective nature of symptoms so facilitates large-scale bullshitting, there is massive scammery in the name of getting people to act as money suppliers in return for drugs they don't need, or which end up making them worse, or which don't treat their diagnosis, or which don't treat any diagnosis, or which treat a nonexistent diagnosis that was made up to be something that some random jollop could be sold as a cure for and was then written into the DSM by bent doctors acting under corporate pressure (in what seems to be a system whose internal politics strongly encourage them to bend under corporate pressure), etc, etc... With the kinds of drugs involved this doesn't generally kill people much, at least not in any way you can point to or count numbers of, but it does result in an awful lot of people getting fucked up both by receiving treatment which is wrong and by not receiving treatment which would be right. Moreover, it spreads to other countries, for reasons like the international nature of drug companies and the lack of awareness or acknowledgement.

    Corporate exploitation of addictions to things other than drugs by means which lead to the subversion of national political systems and the increase of Nazism is a social health problem of the same kind. While you can easily count the number of people directly murdered by Nazis, that - in the current state of the modern situation, as opposed to the historical one - does not say anything meaningful about the scale of the overall problem. It doesn't say anything about how close we may be to the point where, if things are permitted to continue to escalate, positive feedback kicks in and mass opinion starts to generally accept people being murdered by Nazis. It doesn't capture the effect, either in amount of misery or in actual suicides, of people's lives being made more unpleasant by increasing racism and hostility. It completely avoids any relation to future increases in the number of old people dying of hypothermia due to lack of heating in another country where the deployment of the same techniques has caused that country to proudly and happily decide to stab its own economy in the guts. Etc, etc...

    This is all the sort of stuff which is at best difficult or impossible to express in quantitative terms, and of which it is often meaningless to even try; there just isn't anything to count. But it is still well established qualitatively; Charlie's written a whole essay about it :)


    What you've described is a society especially vulnerable to systems collapse.

    Something like the Late Bronze Age collapse, with the mass movement of refugees (a modern-day version of the "Sea Peoples") being triggered by climate change.


    At this point I'm also a little surprised that some big church hasn't decided that surveilling their followers is a wonderful way to ensure a flow of big donations.

    While it is overall a small number, there is a growing number of churches that require members to report their sources and amounts of income so they can be pressured into the proper "tithe" amount.


    The geolocation stuff is terrifying. However, can't you just turn off the broadcasting of your location from your phone?

    Are the police going to let you do that?


    I mean, your bank doesn't make money off of you using its app, does it?

    Actually it does, in a reverse sort of way. It reduces the need for physical local bank employees and for call center employees. I rarely physically go to a building for any of the 4 banks I deal with on a regular basis (checking account kind of basis). And I call them even less. All because they first developed decent web sites and now they have apps that let me accomplish 99% of my goals. My wife and I wrote a check from one account and got an official bank check the other day, only because a credit card we needed for an end-of-year purchase arrived late in the mail. And if I hadn't waited till the last week to set things up I'd not have had to do even that. And recently I ran into one bank's app deposit limits for the month and had to visit a real branch to deposit a check.

    Apps make the banks money. But by reducing expenses instead of generating more income.


    The average age of cars in the U.S. will likely increase as the economy bleeds out.

    Cars flat out last longer. In the US it started with the Japanese imports having better reliability back in the 70s/80s. Now most new cars are expected to last at least 10 years. I just got rid of a 96 Explorer that was near end of life, but another fellow gave me $400 and planned to keep driving it for a few years. My current truck is a 10-year-old Toyota that I expect to last at least another 10 years. My 2016 Civic may still be on the road in 2040 (if gas is not the equivalent of $50/gal).

    When I started driving in 1970, cars were considered old after 3 years and required a lot of care to go more than 6 to 8 years.


    Most (v4) IP addresses allow you to identify the country, or even the city, from which you access the internet.

    There are free web sites that will typically get you closer than that. And many wifi hotspots are in various registries with their addresses known to within a few hundred feet, so between those you can sometimes figure out a location to the block level. And the ISPs now sell their data to others, so you can really nail it at that point.


    Cue the inevitable "But you can't do that because..." - Stop right there. I don't care what comes after "because", I deny it.

    Sure. If you withdraw from the world as it is today. Or at least as it is in first world countries and cities.

    Go take your family and live on a remote mountain top by yourselves or in a fundamentalist group and you can do it your way. Other than that you can't operate in the reality you desire.


    I also keep on thinking about how history is cyclical, too, and how (as you note) we have had pretty much everyone who survived the Great Depression die. That's a shoe I'm really afraid of finally dropping, enough of my net worth is in index funds for that to be pretty scary.

    I was born in '54 and am likely one of the youngest people who was paying attention to the Nixon Watergate mess. Most of my friends were not. And I think that lack of memory is a big reason for BREXIT and DT and similar things that are going on.


    "(At this point I'm also a little surprised that some big church hasn't decided that surveilling their followers is a wonderful way to ensure a flow of big donations.)"

    It isn't big (unless you believe their hype), but that describes Scientology to a T. "Parishioners" have to undergo "Sec Checks" periodically (especially if they get in trouble), and are encouraged to file "Knowledge Reports" on each other if they witness someone breaking the rules.


    The Bronze Age Collapse gets all the press, I suspect mostly because it's a mystery that acts like pareidolia (connect the dots to make any pattern).

    There are other cases where refugees overwhelmed the structures of stable societies, such as the European Age of Migrations. IIRC, the Spanish colonization of Florida ran into the same problem during a war, and Jordan and Kenya are running into something similar now with refugees from Syria and Sudan.

    Later this century, I suspect we'll see migrations that put all these to shame, coming out of Bangladesh, Shanghai, and the Mekong River Delta. Florida might be a bit of a side show, really.


    Musk hijacked a paperclip factory bandwagon to get us to Mars (away from this "sacred" cabin-fevered biome) asap and to light a fire under the ass of EV adoption. The AI paranoia is flawed, but even that's a good thing because it hits closer to the root of what's wrong with the world (not crony capitalism) than handwaving about gov regulation, everything and the kitchen sink.

    The problem is electoral apathy: apathy about how the sausage is made and about rotten sausages (crony capitalism for one, and obviously the present-day world-spanning Yellowstone magma chamber of corporate bullshit, incl. the giant ever-swelling white zit of a PR department at the White House, or why AI can't just be neglected). Self-perpetuating bad culture. People are too busy dreaming of heptaquilted TP and superbowl weekend and not enough about brass tacks like how that sausage is made and whether their own BBQ needs cleaning too. If "bread and circuses" was a thing thousands of years ago, it won't be less of a thing once both bread and circus are jacked up and plugged in and microtransactioned with modern technology.

    CULTURE is what "broke the future". Same as what brought us this clown of a US pres. Enough people were stupid/ignorant/passive enough to allow/vote for (pay for) what Trump & co peddled then as mere corporate products (and Uber and Weinstein and anything/one enabled by crowdfunding), and likewise now with that same clown peddling his junk dressed up as political merchandise. Technology is a multiplier of human expression and if culture is asleep at the wheel, Vinge's "bad hair day apocalypse" era will creep in with exactly these sorts of "5%" out-from-nowhere anomalies.

    And if the world of the future (yesterday, really) is too complex for Joe "Lambda" Blow to keep up with and keep in check, then AI augmentation (ultimately, but good enough if beginning like today's manifold benign AI assists, later something like The Diamond Age's Primer in various forms for various ages/disciplines/cultures/etc.) is one of the very top priorities as Musk tweets about. Tweeting is good if it lights a fire under public apathy to technological sausage making.
    Anything that increases computational content and speed of public discourse is good.

    IMHO aging is the other top priority, because people with (at least) twice as much life experience have to make for a more savvy electorate and consumer population, WRT everything from TP to geopolitics (e.g. North Korea or Brexit, and how those butterflies really aren't just distant abstractions). It's a needless waste to allow reproductive biology to trump the biology that produces the best parts of humanity: the wisdom that's inarguably proportional to lifespan, and the reason we stopped living in caves, repealed slavery, etc. The wisdom that influences AI.

    If people's lives were less made of suckage due to most of them wageslaving for most of their unmodified/involuntary lifespan for the sake of maybe 1 decade of freedom, then society overall should see an uptrend in quality. Longer lives would give people hope and/or something more to look forward to. Even more so because of the exciting times we live in - because of technology and everything it potentially allows us.

    edit: now I've gotten to the end of the video with the Q&A, and you purposely wrote it as a wake-up call, so I feel less annoyed and more pleasantly disagreeing.


    To make the creepy use cases even creepier: there was a story that went around in May that Facebook can help advertisers target people who feel "insecure" or "worthless"; in a word, vulnerable.


    Thank you. Points: 1: "public benefit corporation" - presumably in the US? Interesting idea. We have "Not for Profit" companies & corps.

    2: From the above & Charlie's lecture ... Corps as "people" - NOT in the UK, nor much of Europe. Lax though our corporate-control laws are, they are not quite that lax, & the idea of "Economic & Social Responsibility" ( Often shortened to "ESR" ) is gaining traction here, too.

    3: "Mastodon" - do tell? Sounds interesting.


    I scored ILIE, no surprises there really.


    IMHO, around the end of the Cold War the west fell into an intellectual coma: the 'myth of inevitability'. Once you assume that it's all 'inevitable' you don't have to pay attention, and we didn't. I think it was Leonidas Donskis who called this belief in inevitability 'liquid evil', and I'd agree. Unfortunately, when people wake up and realise that there's precious little inevitability, they don't automatically embrace the fact that they have a role to play in history; see BREXIT, MAGA et al and the rise of nationalist populism.


    In other words there's always some smart arse with a hankering to build a V2...


    The situation / the description of the future sounds very dramatic. A dramatic situation asks for a dramatic solution. So, have you read some of the essays / texts Theodore Kaczynski wrote? What do you think about his (kind of) opposition to (modern) technology?

    Is there a chance that one could win against global AI / corporations with relatively "soft" actions, or will there be an unavoidable "bloody" revolution in the end?

    But it's impossible to punch a corporation, and it may not even be possible to identify the source of unfair bias when you're dealing with a machine learning system.

    Lufthansa, under investigation for fare increases on inner-German air routes after competitor Air Berlin went bust, recently tried to argue that they didn't raise prices - "The pricing AI did it!"

    The Federal Cartel office doesn't seem to buy that, though - Reuters article, hope the link works.


    Hm. In the higher authorities, the highest positions are often held by people with a political agenda. These people belong to a political party. The political party becomes important through successful elections. Electioneering costs an awful lot of money. Where does this money often come from?

    Yes. From the industry / corporations.

    Here the circle closes: money for electioneering, if the party "remembers" the donor afterwards.

    And so I wouldn't count on reason / rationality of high authorities.

    P.S.: In the past, the laws governing the possible actions of the Federal Cartel Office were designed to turn it into a cosy paper tiger.


    I love the last question (or rambling comment), which cuts to the bone of this: she had to rejoin Facebook because it is so good at what it does. We won't be destroyed by bad tools, but by really, really good tools which are just too good to give up. Like heroin, like cocaine, like land mines, and gasoline: it's the really good shit which gets its teeth in and won't let go.

    I'm giving it two more election cycles before a first-world nation tries a prohibition-level ban on social media and deep profiling. I don't think it will be the USA; I expect the US will be used as the example of why it is necessary.


    "Ignoring peer pressure" really needs to become a matter of basic education right from the very start of school.

    While you're at it, can you please legislate to remove the typical 15-year-old's sense of their own immortality, belief in the stupidity of their elders, and tendency to say "watch this!" (where their 18-year-old elders say "hold my beer")?

    I hesitate to use the loaded phrase "basic human nature", but it is pretty clear that we're social animals, we learn from each other and by doing shit (including crazy stupid shit that doesn't kill us), and that legislating to change this is a fool's errand (all we can do is try to stupidity-proof the environment where our young peers are maturing until they're able to evaluate risks sensibly).

    As for giving kids phones, I got my first wrist watch when I was ten. It cost about £3 (or £20-25 in today's money) and it got bashed up but it enabled me to get places on time. Today, a dumb phone costs about £20-25 and needn't be linked to a credit card, it can be topped up using vouchers: in return, it means the parents always know where the kid is, and the kid can always call for help. I fail to see any way in which basic mobile phone functionality in such situations isn't a lifeline (although we can debate the wisdom of giving young kids smartphones and unlimited app top-ups until the cows come home).

    As for your anti-smartphone rant, you're as off-base as someone calling for the abolition of home computers in 1982, but I'll put it down to mild ASD for now.


    Forked app stores? Amazon runs one for their Fire devices, and I believe there are others; all it takes is money and will. This usage of "fork" refers to a fork in the road.


    I'd dispute the bit about 70s/80s Japanese cars somewhat. American management "culture" did make them look better, but in western Missouri they're rare, having rusted out faster. I will note that I've only ever bought a new car once, and couldn't afford a Japanese car I could comfortably drive at that time (long legs).


    Link works & note the official reply. A "corporation" must have what used to be called a "controlling mind" in Europe & Britain - & if you are the boss, you're responsible. (Supposedly.)


    Although there are vast vested interests at play in British & European elections, the open pouring-out of vast sums, as seen in the USA, is illegal over here - fortunately. Yes, of course we have corruption, but it is at least kept under some sort of control - so far.


    In the U.S., the large donors resemble a fourth branch of government; their input has priority over the wishes of the voters. As a whole, we don't seem to be learning all that fast, so we get to be the horrible example.


    Yes, of course we have corruption, but it is at least kept under some sort of control - so far.

    Ya think?

    I recall a journalist covering a Tory party conference a couple of years ago who noted that the delegates there were split evenly into three groups: parliamentarians and their staff, constituency party members, and corporate lobbyists. (Yes, the lobbyists clearly outnumbered the MPs and nearly outnumbered the MPs and their combined staff.)

    Direct election nobbling is illegal, but pushing policy papers at ministers and then offering them cushy jobs when they retire is SOP.


    Later this century, I suspect we'll see migrations that put all these to shame, coming out of Bangladesh, Shanghai, and the Mekong River Delta.

    Shanghai might not be such a problem. There are lots of ghost cities in China, all in the interior, and according to a Beijing professor I knew one reason the government encouraged their construction was that it provides somewhere for people to go when the seas rise.


    paperclip maximizer = replicator

    So we have had replicators since life began. The result has been a flowering of life, particularly of metazoa.

    AI today and in the near future is primarily software, so the nearest replicator analogy is memes. We've had those ever since H. sapiens could communicate. Human brains are great copiers and built to persuade. We expect memes to try to replicate as widely as possible and use up the available cognitive capacity. As with genes, the result has been a flowering of ideas.

    As natural replicators have shown, no single gene, or gene embodied as an organism, has dominated. There are so many different "paperclips".

    Worrying about AI controlling everything is like single-celled eukaryotes grumbling about those early metazoa consuming everything and turning the planet into porifera and ctenophores. As evolved metazoa, we see the benefit of that metazoan takeover.

    The various political -isms are just fights about how metazoa should organize: like Volvox (= communitarian) or ctenophores (= authoritarian). We know the end result: centralized brains won out. That hierarchical human societies have proven most stable should be an early indication of the future.

    None of this helps with writing near-term sci-fi, but it might help with far-future sci-fi.


    In other words there's always some smart arse with a hankering to build a V2...

    Or collect a LOT of smoke detectors.


    Given that there is some evidence to suggest that people in general become more conservative and less flexible as they get older, AND that in part they gave us Brexit and Trump, I would suggest we prioritise mental flexibility and agility over physical aging - presuming of course that the former doesn't have a significant physical component that would make anti-aging treatment a silver bullet for all sorts of biases.


    Locating a device by wifi means knowing the location of the wifi access point device.

    Google etc. have built databases of wifi access point locations (maybe by recording the GPS locations reported by those attached devices that do report them?). It's probably roughly as accurate as phone GPS.


    What really disappoints me about the internet of reality is how ignorant of simple facts most people remain, even after it became so much easier to inform yourself.

    Yesterday there was an editorial in the Seattle Times where the editors wrung their hands about city spending. You see, city spending is up 39% over the past 5 years. They compared that to the 11% population increase (yes, astounding growth; Amazon). They failed to compare it to the 32% metro-region GDP growth over that time, probably a bit higher for the city itself (such data is collected for metro regions, not cities, so we don't have a great handle on the GDP of each city, and estimates are much less often reported).

    This is the editorial board of a significant newspaper. These people are supposed to know how to find facts. Now, sure, it's the editorial board of a family-owned newspaper, which means the boss inherited his position, but he probably isn't stupid, and surely somebody at the meeting should know that the default assumption for no-policy-change government spending is that it should track GDP growth to a first approximation, and thus the growth is very much what one would expect given GDP growth.

    There is no reason to have a discussion absent trivial facts anymore, yet newspaper editorial boards are still writing without bothering to look them up. It's no longer a two-hour trip to the library to try to figure out how Seattle's economy has grown over the past 5 years. It's a minute or two with the computer in your pocket.

    This failure was deeply set before Facebook even existed. My father used to occasionally send me right wing chain-emails he got from friends and I would, every single time, gut them with simple facts found in 15 minutes on the same machine he used to send me the baloney.

    Facebook, etc., take this failure further. They provide a proactive stream of baloney specifically tailored to be what you want, powerful tools for refining that stream, and a community built around your favored baloney.

    But fundamentally it works because so few are willing to spend even one minute to look for facts.
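
    The back-of-the-envelope check being skipped here really is a minute's work. A minimal sketch of it (the 39%, 32%, and 11% figures are the ones quoted above; everything else is just ratios):

```python
# Sanity check on the Seattle editorial's numbers (figures as quoted above).
spending_growth = 0.39    # city spending, over 5 years
gdp_growth = 0.32         # metro-region GDP, same period
population_growth = 0.11  # city population, same period

# If no-policy-change spending roughly tracks GDP, the "excess" is small:
excess_vs_gdp = (1 + spending_growth) / (1 + gdp_growth) - 1
print(f"Spending grew {excess_vs_gdp:.1%} faster than GDP")  # ~5.3%

# Comparing against population alone makes it look far more alarming:
excess_vs_pop = (1 + spending_growth) / (1 + population_growth) - 1
print(f"Spending grew {excess_vs_pop:.1%} faster than population")  # ~25.2%
```

    Same raw numbers, two very different headlines, depending on which baseline you pick.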


    A hopeful note: eventually my father rejected this right wing baloney. I think he was saved mostly by a personal interest in science, which exposed a lot of the nonsense he was passed.


    The geolocation stuff is terrifying; however, can't you just turn off the broadcasting of your location from your phone?

    Are the police going to let you do that?

    No. In the US, at least, cell phones are required to track GPS location. They aren't required to let you know they are doing so, or to let you see or use the information, but the police can, and therefore others can also. How hard it is to get that information if you aren't paying for it to be available, I don't know.


    "As for your anti-smartphone rant, you're as off-base as someone calling for the abolition of home computers in 1982, but I'll put it down to mild ASD for now."

    Curiously, I received exactly that sort of response when I was saying similar things about the burgeoning private car fetish in the 1970s, and when saying similar things about the cheapness of consumer goods in the 1980s. I side with pigeon that the consequences of the way smartphones are constructed and used are becoming unacceptably socially harmful - but please note that I mean exactly what I said.


    Charlie, you don't tell lies for a living.

    Long ago, in a galaxy far away, aka the late '70s, I was a library page, and another page was a black woman around my age. One day she asked me what I was always reading. I told her mostly science fiction... and her response was, "Fiction? That's like lies, right?"

    I was so shocked it literally took me three days to come up with an answer for her, and I've liked it ever since: no, fiction is not like lies. Lies are where you represent something to be true, when you know it's false. Fiction, though it may tell truths, represents itself to be false.


    What concerns me about smartphones and similar electronics is the effect they have on parent-child interaction. I am sure we have all seen mothers with small children who ignore the child in favour of the phone, even when the child is crying, running away, etc.


    There is no reason to have a personal discussion absent trivial facts anymore, yet Newspaper editorial boards are still writing without bothering to look them up.

    Depends on whether you are trying to present information or convince someone. My local paper has a writer who is consistently anti-teacher*. He never misses a chance to point out the "massive" increases in teacher salaries over the last 15 years, compared to the "minuscule" annual increase the average Ontarian got over that time period.

    A bit of simple math shows that teacher salaries have increased a whopping 0.1% more than the average worker's over a 15-year time span, yet most readers can't do compound interest in their heads, so they just look at the percentage increases he presents and think "those bastards got much more than I did".

    It isn't that he can't do the math. He demonstrably can (in other articles). And I've sent him several (acknowledged) letters pointing this out. (Also pointing out that he is conveniently ignoring the preceding decade of wage freezes and pay cuts for teachers, which if included shows that a teacher's salary has actually slipped compared to the average worker.)

    I suspect that something similar is happening with your editorial board. They know what they believe and are simply ignoring inconvenient facts. Whether this is an inability to comprehend exponential growth or willful ignorance you'll have to decide.

    You might find this a useful resource:

    *Anti-government in general, except when the government is increasing police funding or clamping down on those horrid environmental activists interfering with good businessmen :-/
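
    The compounding point above is easy to make concrete. A minimal sketch with purely illustrative numbers (the 2.0% and 1.9% annual raises are assumptions for the sake of the example, not figures from the column):

```python
# Illustrative only: the annual raises here are assumed, not from the article.
teacher_raise = 0.020   # assumed annual teacher raise
worker_raise = 0.019    # assumed annual average-worker raise
years = 15

teacher_total = (1 + teacher_raise) ** years - 1   # ~34.6% over 15 years
worker_total = (1 + worker_raise) ** years - 1     # ~32.6% over 15 years

# "Teachers got 34.6% while you got 32.6%!" sounds dramatic as a headline,
# but the compounded gap between the two groups is tiny:
relative_gap = (1 + teacher_total) / (1 + worker_total) - 1
print(f"Teachers ended up {relative_gap:.1%} ahead after {years} years")  # ~1.5%
```

    Presenting the two cumulative percentages side by side, without the ratio, is exactly the trick the columnist relies on readers not checking.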


    Yes, but remember that the rise of the eukaryotes, and especially the ones that included chloroplasts, caused one of the great dyings. There are clear signs that we are in the middle of another "great dying", but there's no guarantee that we will survive it, even if our AIs do. They may not end up requiring the same life support system that we do.


    What concerns me about smartphones and similar electronics is the effect they have on parent-child interaction. I am sure we have all seen mothers ...

    What concerns me is their addictive nature. I've seen a person in an electric wheelchair driving diagonally across a major street (30 mph speed limit, 2 lanes each direction) while texting. Admittedly that's an extreme case, but the point is, it's not just Parent-Child interactions.


    I was so shocked it literally took me three days to come up with an answer for her, and I've liked it ever since: no, fiction is not like lies. Lies are where you represent something to be true, when you know it's false. Fiction, though it may tell truths, represents itself to be false.

    I think fiction is still lies - but honest lies, in that it honestly tells you that these are entertaining lies. Other stuff, which doesn't tell you that, is dishonest lies.


    Re: 'Technology is a multiplier of human expression'



    There are lots of ghost cities in China, all in the interior, and according to a Beijing professor I knew one reason the government encouraged their construction was that it provides somewhere for people to go when the seas rise.

    I can't see it. Buildings, roads, and infrastructure all take money and effort to keep from disintegrating, even if they're not in use. If the "ghost cities" were built to cope with sea rise and global warming, they won't be needed for habitation for decades, if not centuries. It's easier and cheaper to build towns inland nearer the time they're required, when they will be a better fit for people's needs.

    My guess is that they were built to absorb excess money sloshing around the Chinese economy, and it seemed like a good idea at the time (Keynes, Milton see...)

    China's been trying a lot of things, some of which work (their high-speed rail network, for example) and some of which don't (their road-straddling bus). Building new towns to absorb their population rise and drift towards urban living from the countryside seems like a good idea but in reality folks go where the towns already are, employing people and providing infrastructure while those towns grow outwards and upwards to cope with the new arrivals.


    The initial seeding was done by the Street View cars, I believe; updating may well be done when location tracking can use multiple sources.


    Charlie --

    A few weeks ago I listened to an interview with Jaron Lanier. He said almost word for word what you just did, especially the part about the "fundamentally flawed, terrible design decision back in 1995... monetizing eyeballs via advertising revenue." When I heard that, I actually thought you had cribbed from Lanier.

    One difference is that unlike you, Lanier was personally involved in making that design decision, and now regrets it terribly.


    Re: '... the consequences of the way smartphones are constructed and used are becoming unacceptably socially harmful'

    Think the key word here is 'consequences', and the missing concept is 'limits'. IMO, the biggest barrier to recognizing and evaluating consequences is the persistence of belief that only physical injury is 'real'. (Emotional and cognitive injury are not 'real'.) This may change once the WHO adds “gaming disorder” to its list of mental health conditions in its 11th International Classification of Diseases guidelines (2018). The tie-in to mobile phones is that mobile phones are how a large segment of the population access/play video games. I understand that this is only one of many harmful ways that mobile phones can be misused. However, once this connection becomes established, it will probably become easier to develop 'safety limits' and other guidelines re: mobile phone usage.


    Re: 'China ghost towns'

    Depending on the source, this is already changing: more businesses and gov't departments have moved into these cheap/affordable areas, which btw have good transportation links to major cities.

    And here's a list of US 'ghost towns' including some prime beachfront areas:


    The initial seeding was done by the Street View cars, I believe; updating may well be done when location tracking can use multiple sources.

    Time Warner[1] had a policy of offering any business account free public WiFi - well, free for anyone with a Time Warner internet account. Restaurants and other public-facing businesses ate it up. And to be honest, you had to expend effort to NOT have it.

    But that meant they had location data for millions of computers, to within 100 feet or so, whenever they used these access points. You know they were selling it to others for correlation.

    [1] Now that Time Warner and Charter have merged into Spectrum, I think this policy has become standard for Spectrum, but there are a lot of ongoing non-alignments for those of us with both old Charter and TWC accounts, depending on where your account is located. [eye roll]


    Yes, I do think that OF COURSE we have corruption & it's bad & should be stamped on - just that it isn't (yet) as bad as that in the USA - though that may change unless something is done about it... (OK??)


    This is the editorial board of a significant newspaper. These people are supposed to know how to find facts.

    To some degree I think it is wired into the DNA of how print reporters were trained.

    Things like: water usage is up 50% in city A but only up by 50,000 gallons per day in city B

    Just what is any sane person supposed to do with that statement? Especially when there isn't enough data in the article to correlate gallons used per day in both cities.


    I am reasonably used to believing that corporations are organisms. IIUC, Charlie's thesis, loosely, is that some social media organisms are becoming pathogens. Can we then expect a transition from pathogen to parasite to symbiote, at least for some of them?


    Addictive smartphones, etc. ... TRY THIS: RAIB report of a terminally stupid teenager who walked into the path of a train with her earphones & jingle-jangle at full volume ... Apparently the on-train camera showed her finally looking up about 0.6 seconds before she was converted to red jam.

    Darwin awards, here we come!

    BTW, I liked one excerpt from previous near misses at said crossing: Quite


    Yes, there are a lot of hand-waving post-facto explanations out there for all that excess capacity, but the real reason for all the ghost cities is actually pretty straightforward. Fundamentally, China needs to keep its growth rate at 8% to ensure that the standard of living and employment growth rates keep pace with its increasing population. If it doesn't, and growth starts to drop below (roughly) 6%, then there'll be serious trouble, along the lines of rioting in the streets. So to ensure that the growth rate continues (and to avoid the consequent job losses and civil unrest at all costs), the central planners have been using stimulus and credit expansion to meet the short-term growth targets.

    Of course it's clearly impossible for this rate to continue indefinitely (unless you've got a second Earth somewhere), so sooner or later there's going to be a crunch, and when it comes it will be exacerbated by the debt burden they've built up. Unfortunately the west treats the Chinese economy as some sort of Magic Dumpling from which you can forever keep taking slices, but the reality is that this particular dumpling is really made up of toxic debt, overcapacity, zombie enterprises and capital misallocation. A business colleague of mine calls China's growth rate policy 'sleepwalking into a threshing machine'.


    There's also Kilamba in Angola, built by the China International Trust and Investment Corporation.

    5,000 hectares 18 miles outside the capital Luanda.

    750 8-storey towers where hardly anyone wants to live...



    That's an obsolete view. Go read Thompson's Relentless Evolution for a more useful take (the mosaic theory of coevolution): it's a way of describing how relationships among organisms vary through space and time.

    Anyway, corporations are not organisms. They're pieces of paper, used as a legal tool. There was a long tradition of assuming that plants were parts of superorganisms called plant communities, while animals were parts of superorganismal animal communities, and so forth. The idea was that rather than evolving or growing, the superorganism tried to come into equilibrium with the environment, which was assumed to be constant. When the superorganism was disturbed, it underwent succession, as various species did their thing, making the superorganism's habitat more suitable for the next group (more late successional species), until the climax species arrived, and they were in equilibrium with the environment.

    Problem is, this explanation is wrong. When someone figured out how to objectively test it back in the 1950s (published as The Vegetation of Wisconsin), it turned out there was absolutely no evidence for the existence of superorganismal plant communities. All the evidence pointed unambiguously to plants each growing best in the parts of the landscape where they could outcompete the other plants that arrived there. To an uncritical eye it looked superorganismic, because the same few dominant plants tended to win out over time (think oaks, pines, etc.), but there was no evidence of an organized process. Oh, and (this turns out to be really important) the climate changes at all scales, even without anthropogenic climate change. A lot of old trees are currently around most likely because of the Little Ice Age, and a lot of tree seedlings are now dying because the place where their parents grew up is too hot and/or dry for them to survive.

    I hate to say it, but the same is true for corporations. As structures to organize human activity, they're pretty good, but when you call them AIs, you ignore all the people who specialize in organizing knowledge specialists into working parties to get goals done, and you especially ignore what happens when those goals conflict with the stated goals on the pieces of paper that describe what the corporation is supposed to do. It's a lovely metaphor, but just as the notion of plant communities screws up conservation work in the face of climate change, you've got to be careful that your discussion of AI corporations doesn't mislead you into inferring behaviors that the corporations don't and won't show.


    That list is it!? It's my fault that I didn't try to find a list of "ghost cities" when I first heard about them. From the looks of that list, China has fewer ghost towns than Spain did during the Euro Crisis. Does Spain still have those abandoned projects littering the seaside?

    Should the New South China Mall even be considered on that list? This may not be a problem in the UK, but the US has hundreds of dead malls.

    It's interesting that China isn't even in the top 5 for GLA per capita (it's the first table in the Atlantic article).

    Probably someone built that mall just as China was transitioning to online shopping and got caught unaware

    I've heard that China has tended to build new cities ahead of time. The fact that they misjudged on so few districts speaks well for their management. I agree that few countries actually build a city ahead of time for VERY good reason.

    Looking around, I ran across these articles

    It identified more than 50 ghost cities in 2015, but it uses a definition of "half the minimum population density an urban area is expected to have". I don't know enough about China to know if that's a reliable metric? Also, is their resolution too small?


    I have an amorphous memory of having read the same thing somewhere (possibly somewhere as authoritative as a blog comment), but on second thought, if you were Cambridge Analytica wouldn't it be in your best interest to seed that line into the minds of politically engaged liberals? And if you were them you'd have all the data you needed to know just where to start.


    On the topic of mobile phones, geographical positioning and mass surveillance, there are two points that I would like to add to this discussion.

    First, today's mobile networks actively depend on knowing their users' approximate spatial position - otherwise they would not know through which cells to route calls. This is true for 2G technologies up to 5G, and I believe the state of the art in positioning a couple of years ago was called AECID. In a nutshell, it calculates a phone's position by looking at the signal strength "fingerprint" (i.e. which nearby cells have what quality of signal) and referencing it back to the operator's internal cell network planning map, which has all these coverages and levels neatly pre-calculated. In dense cities there are a lot of cells with smaller coverage (also think of how reception is done in subways), thus allowing for more accurate positioning without the need for any permissions from the phone's owner. Admittedly, this will require the cooperation of the baseband modem, I believe. However, I assume this positioning technology is or will be silently baked into newer phones, and having it could be a prerequisite for getting access to latest-generation cell networks.
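    A toy sketch of the fingerprint lookup described above: match the observed signal strengths against a pre-computed coverage map using nearest-neighbour distance. All cell names, dBm values and grid positions here are invented for illustration; a real AECID implementation would use the operator's actual coverage-planning data.

```python
# Toy sketch of fingerprint-based positioning. All cell IDs, signal
# values, and grid positions are invented for illustration only.

def match_fingerprint(observed, reference_map):
    """Return the grid position whose pre-computed signal-strength
    fingerprint is closest (squared Euclidean distance) to the
    observed one."""
    def distance(fp):
        cells = set(observed) | set(fp)
        # Treat an unheard cell as a very weak signal (-120 dBm).
        return sum((observed.get(c, -120) - fp.get(c, -120)) ** 2
                   for c in cells)
    return min(reference_map, key=lambda pos: distance(reference_map[pos]))

# Pre-computed planning map: position -> {cell_id: expected RSSI in dBm}
reference_map = {
    (0, 0): {"cell_a": -60, "cell_b": -90},
    (0, 1): {"cell_a": -75, "cell_b": -70},
    (1, 1): {"cell_a": -95, "cell_b": -55},
}

# A phone reporting strong cell_b and weak cell_a is placed at (1, 1).
print(match_fingerprint({"cell_a": -93, "cell_b": -58}, reference_map))
```

    Note that nothing here needs any cooperation from the handset beyond its routine signal measurement reports, which is the point of the comment above.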

    So, now the movements of all users of a cell network can be conveniently stored and mined from one or several central locations. I think it's pretty safe to assume that enterprising law enforcement agencies have already hooked their stuff up to this system and store this information.

    Second, to me, the Equifax breach and all the other fun news from Silly Con Valley last year demonstrated pretty well that this sort of personal data will probably be stored not very securely and will eventually be sold off for fun & profit. If it is not already.

    Personally I am hoping that the new EU regulation on data protection will force some sort of reckoning in this area. The US on the other hand has decided to take a step back and end Net Neutrality, which will probably make it much, much easier for bigger companies to rig up some bad-faith scheme to mine their customers' data and sell it to their hearts' content, because the alternatives are their equally bad competitors or just going to pound sand.


    By the way, thanks a lot for the talk. I really enjoyed it as it neatly summed together and verbalized a lot of the things I have been thinking about during the last few years.


    The US on the other hand ... That is a short description of one facet of a country headed directly towards a corrupt corporate state - which is one of the definitions of fascism, isn't it? (yes/no?)


    China kind of has to build the cities ahead of time. Historical China was built on village industry - a gazillion tiny places where people spent just enough time farming to feed themselves, and the rest of the year on crafts. About twenty million Chinese people are leaving those places every year to go somewhere where there is actually work to be done. If they did not just plop down cities by fiat, China would have a slum problem out of this world.

    Eyeballing the percentages of rural versus urban population in fully industrialized nations, this massive movement of people will keep going for another fifteen years or so. At which point being invested in the Chinese housing construction business might not be the most well-advised idea. But panicking about ghost cities in the nearer term is mostly just fear-mongering from people constitutionally unable to comprehend dirigisme working at all.


    Coincidentally, I drew this a few days ago while watching tourists in a local café.

    I also drew this:

    Most of the customers are obsessed with their phones, so one ends up drawing a lot of phones and hands. This customer was particularly obsessed; indeed, I'd term her gluttonous. She was cramming crisps into her gob with one hand; and if that one hand couldn't leave her food alone, the other hand couldn't leave her phone alone. She was constantly looking at and prodding it, devouring it with her eyes. Dividing attention between snack and Snapchat, she must have been gaining satisfaction from neither.

    Despite being rough and unfinished, both drawings make a point to me, or perhaps a meta-point. No-one else was drawing. Or writing on paper. Or (as far as I could see) typing anything more sophisticated than a text message. Most of the phone users seemed passive, fingering their screens until prompted by incoming messages.

    I'm too impatient to be like that. I prefer creating. I like to think that this would give me some protection against the addiction-seeking deep-learning horror that OGH describes. Let me borrow the pharmacological notion of receptor antagonist:

    And let me further explain what the green "Antagonist" blobs are with this collage:

    In other words, whatever happened to hobbies? To doing things, other than typing, with one's hands?


    Treason doth never prosper: what's the reason? Why, if it prosper, none dare call it treason.

    The reason that corruption is so low in the UK is that those involved have arranged the rules and brainwashed, er, educated the public that what they do isn't really corruption. Even if central government were held to the same standards that it holds local authorities, charities etc. to, the amount would go up massively (fivefold?). Also note that many of the activities that cause rows in the USA pass almost without comment in the UK.


    It's been about 16-17 years since steered beams started getting deployed, requiring a base station to know not only how far away the phone is, but also what approximate angle it is at. I don't recall where state-of-the-art beam steering was, back then, but I think the beams were about 5° wide.


    In case it's not clear, I'm asking whether there's some way we can give people a kind of mental immune system that would protect them against addictive software and media, including the attention-maximising AI-generated videos. Perhaps one thing we can do is rejig education so as to encourage children into other interests, especially off-screen ones: and to make these more compelling by somehow tying them up with the children's sense of self-worth.


    Contrariwise, look at the giant "Charities" tax scam in the US (!) And I agree that it's worse than it's painted, especially at the local authority level (see "Rotten Boroughs" in the "Eye") but fivefold .. naah, not buying it. Doubling, yes, I'd believe that, no probs.


    OF COURSE we have corruption & it's bad & should be stamped on - just that it isn't (yet) as bad as that in the USA

    Disagree: I think corruption in the UK is probably even worse than in the USA. (Hint: look at the Panama Papers and similar. We're the world's dirty corporate money laundry, our press is owned by Russian mafia oligarchs, our public services have all been sold overseas, "our" nuclear deterrent probably can't fly without foreign permission — it's a UK-paid-for extension to the US Navy missile force — we have police spies infiltrating peaceful political groups, we have the use of libel law to chill public discourse (as in Singapore), and so on.)

    We're just better at denial.


    I note that as we move towards gigabit WiFi speeds, we are beginning to see home wifi routers with multiple antennae doing beam steering to feed connected devices.

    There's also research on using gigahertz wifi radiation for super-accurate indoor location tracking, even through walls. (And an example of it being put to malign intent in "Dark State", which you'll be able to read next week.)


    "our" nuclear deterrent probably can't fly without foreign permission

    Ummm, no. The Trident missiles we "rent" from the United States come from a common pool, they're identical to the ones carried on the US boomers and some of the missiles we currently deploy could well have spent time in American tubes previously (and vice-versa). They don't call home before launch, they are fully autonomous and not hackable from outside for very obvious reasons.

    The warheads, penetration aids and re-entry vehicles on top of the Tridents are all home-rolled, nothing above the mating ring is American (I could tell you a funny story about... but I can't). There is some co-operation between the US and British nuclear weapons establishments but it's at the intellectual level, no material or engineering transfers since the 1960s when Britain gave the US some Magnox-derived Pu239 to test.


    Er, no. We don't know what the deal involves, and there is a potential (political) block there. While the official claim is that the missiles and warheads are fully under our control, that's what they would say, wouldn't they? And there have been some very plausible assertions to the contrary from ex-insiders. And, again, that's all hidden behind the Official Secrets Act, which makes any public claim about what 'really' happens a bit suspect.

    But let's move on to plausible technical control. Inter alia, the UK no longer makes that sort of electronics, so we cannot be sure that the chips don't have undocumented features. They have an abort mechanism, so they are necessarily listening to the outside, and I hope that I don't have to explain that that is all that is needed. And note that it is enough to abort the missile, so the fact that the warhead itself is not hackable by the USA is irrelevant.


    Actually there isn't an externally-triggered abort mechanism on deployed Trident missiles. There was an explosive charge fitted to flight test vehicles to rupture the motor casing and cause it to stop working properly/disintegrate[1], but that was in case the missile went off course or suffered other problems during test firings. In a full and frank exchange of Buckets of Instant Sunshine there is no backsies if the missile does go off course, and you really REALLY don't want an outside actor to be able to override your Go command over the Internets from a hacked PS4 in Guangjyong.

    Once the commander on the sub decides to fire the missiles they have no way of communicating with them in flight since they remain submerged and out of communication with the rest of the world. This is different to air-dropped weapon systems where until pickle release the weapons platform can be recalled by command.

    Sure, in the movies there's a Big Red Abort button that can be pressed in the nick of time when the script requires it but in real life, no.

    [1]Solid-fuel motors are quite difficult to blow up and it's nearly impossible to stop them working at least somewhat. Even during the Challenger disaster, when the LH2 tank exploded, the two SRBs continued on their merry way, with the SRB that caused the tank explosion working at nearly 100% performance.


    They have an abort mechanism, so they are necessarily listening to the outside

    Nope, no nuclear warheads have an in-flight abort mechanism. The abort mechanism is provided to prevent them from flying in the first place — in the case of Trident, presumably permissive action locks. (If they fly, you want them to credibly go 'boom' when they hit the target; an in-flight abort mechanism would render them vulnerable to espionage by the primary target nation. On the other hand, you really don't want them to launch in the first place without authenticated orders.)

    I believe under the 30 year rule it was confirmed that the UK Trident warheads are clones of the US design — of necessity, as they're designed to fly on the same launch vehicle. I'd be extremely unsurprised if, over the past 20 years, the UK warheads didn't actually use American manufactured components, except for the fissile core. (After all, a conical RV heat shield, a guidance package, etc. are not "nuclear explosives" so could reasonably be considered simply additional parts of the UGM-133 missile system.)


    I'd be extremely unsurprised if, over the past 20 years, the UK warheads didn't actually use American manufactured components,

    No. The US does not export, sell or give away any nuclear-sensitive components to anyone. They may lease or loan hardware -- Canada for a time had American nuclear weapons on loan under Canadian operational control, for example -- but everything above the mating ring[1] on the Trident missile deployed in British subs, from firing circuitry to explosive lenses, re-entry vehicle housings, terminal guidance packages etc. are British made.

    There's a lot of cross-technology sharing, design, equipment testing etc. that goes on between the US and the UK (in both directions) and live-fire missile tests are done using American assets such as the missile tracking ship USNS Howard O. Lorenzen (which recently replaced the USNS Observation Island) since the UK doesn't have such facilities, but the manufacture and maintenance of all parts of the nuclear weapons are all home-based.

    [1]The explosive bolts that hold the weapons bus onto the final stage of the missile are a sore point. The US-made bolts are reputedly not as good as the British-designed ones but the American side of the mating ring is where the bus separation controller that fires the bolts is located so American explosive bolts are used on British-deployed missiles. And I never said that.


    “...everything above the mating ring[1] on the Trident missile deployed in British subs, from firing circuitry to explosive lenses, re-entry vehicle housings, terminal guidance packages etc. are British made.”

    But it has to be compatible with everything below the mating ring. Whoever defines and controls that interface has an awful lot of influence over the behaviour of anything which complies with it, wherever it's made and whichever side of the interface it lives on, and that's assuming that there are no undocumented (for overseas customers) features in the interface.

    Also my understanding is that there are systems which prevent the missile's first stage firing until/unless it's clear of the water after being ejected from its launch tube by pressurised gas, and it doesn't seem like it would be beyond the capability of US engineering to nobble that in a manner which leaves your expensive firework bobbing (relatively) harmlessly around on the surface just above the submarine attempting to launch it. In fact, there are probably any number of ways that an attempted deployment could be aborted right up until the moment that the payload bus separates from the third stage, without involving or requiring the co-operation of anything on the "foreign" side of the mating ring.

    You could argue that this is all a bit far-fetched and paranoid, but, given the historic behaviour of the USA towards supposed allies, and the nature of the devices we’re dealing with I’d be extremely reluctant to completely dismiss the idea...


    Re: China & Ghost Towns

    Okay - see your point re: insanely high (8%) GDP growth.

    Have also been wondering whether China's increased presence in Africa - physically as well as economically - has anything to do with this, i.e., since the Chinese have been more successful than anticipated at scattering around the globe (because China is no longer perceived as an enemy to avoid/keep out), there is less need to move populations to new (already built) settlements within China.

    Another possibility for creating these towns is the massive land grab/reforestation that started in the late 1990s which required the relocation of millions of residents. [Also Chinese Green Wall]

    Too bad the new Chinese land reform plan is going live this year. Some critics feel this will result in the same type/style of corporate ag as currently exists in the USofA.


    Re: China - not 'ghost' but 'resort' towns

    Interesting - so the over-building provides the growing middle class with a new way to keep up with the Chous [Joneses].

    Other potential uses for such high density over-building: tertiary education (university towns), seniors (retirement communities).


    Re: ' .... but when you call them AIs, you ignore all the people who specialize in organizing knowledge specialists into working parties to get goals done, ...'

    Sounds pretty much like the way the human brain/nervous system is organized - the conscious/thinking brain part is typically oblivious to what the rest of the brain is doing to keep the body alive. And since it's humans who first designed the AI template, similarities are probably not all that surprising.

    What I'd really like to see is an analysis of the history of the Board of Directors, the corp's prefrontal lobe.


    Oh, strange attractor time again.

    The problem with stopping that first stage firing is that you've got a very short time gap between the missile breaking the surface and it realising it's done so and igniting the motor. Once that solid-fuel rocket is ignited, it's heading for the sky.

    The ocean itself is a pretty effective radio insulator - you can get signals through, but AIUI you need quite impressively sized aerials at both ends and it's not something you could hide from inspection. So you need to be through the water surface.

    Getting your missile to find and read a signal within a fraction of a second, a thousand miles from the transmitter, would be quite a feat.


    The problem with installing a method of nobbling only British-deployed missiles is that the nobbling system would have to be fitted to all missiles since they're chosen from a common pool, not a special production series (aka "monkey model") just for export to Britain. Britain leases the missiles, pays for ones it fires off in tests and regularly recycles missiles back into the pool and picks up refurbished ones from the store. From Wikipedia -- "The pool is co-mingled and missiles are selected at random for loading on to either nation's submarines". Fitting nobbling hardware increases the risk of someone nobbling American missiles which is something they really want to avoid, a bit like not fitting a Big Red Button abort mechanism so beloved of Hollywood.

    The British Sparkly Bits above the mating ring have a lot in common with the American Sparkly Bits but they're not identical -- as I said before there's a lot of sharing of information about weapons design but both sides have their own implementations. For example it is believed that American Permissive Action Locks are a lot more sophisticated than the British protective systems, and safety interlocks are also different, and it's likely that the British Sparkly Bits are metric... Once the weapons bus separates from the D5's mating ring it's on its own so it doesn't have to interoperate much with the American parts below.


    The ocean itself is a pretty effective radio insulator - you can get signals through, but AIUI you need quite impressively sized aerials at both ends

    AFAIK there are no VLF communications systems operational today; the last ones were shut down some time ago. That begs the question "what replaced VLF?" for submarine operations.

    It was a pain to use especially on the sub side of things. They had to deploy and recover a very long antenna on a preset schedule to pick up VLF messages and they didn't have the capability to answer. The data rate was very low, too.

    My (not very serious) candidate for a VLF replacement is steganographic synthetic whalesong on a one-time pad.

    Sperm whale synthetic voice #14, male: "I Wuv youououou."

    Number One gets the transcription from the sonar officer and checks the codebook -- "Right, it's Wednesday, four repetitions of "ou", that means we move to patrol area Delta in 48 hours time. I'll let the skipper know."


    That's a problem? Look, everything nowadays includes CPUs, and those are arbitrarily programmable - and I don't just mean in software, or even firmware. Yes, OF COURSE, it would be there in all of them. If I were implementing something like this, it would be a non-obvious undocumented feature of the actual hardware logic. While all this sounds like tinfoil-hat territory, the USA has quite a lot of form in this area.

    I am not claiming that this IS the case, but that the assertions that it isn't don't have any more evidence to back them up than the hypothesis that it is. Nor do the assertions that there is no political requirement for permission. And the UK has a LOT of form in accepting such conditions from the USA, and hiding them from its enemies (i.e. the British public).


    "China's increased presence in Africa" It's called Colonialism - but not to worry, it's not being done by eevil pink Europeans, so it's all right (!)

    "Too bad the new Chinese land reform plan is going live this year. Some critics feel this will result in the same type/style of corporate ag as currently exists in the USofA." And even that might be an improvement ... There are many areas of China where the "farmers" have been so "efficient" at controlling the wildlife that vast areas of fruit trees (etc.) have to be HAND POLLINATED, because there are no bees at all ....

    Also - as in - what's the strangest wildlife reserve on the planet? Korea's DMZ.


    What happens if someone sells the Russians the missile-nobbling keys? Or the North Koreans? Or the Illuminati? Oops, the entire fleet of American D5s can be switched off at launch, how unfortunate...

    Better not to have any nobbling facility installed in the first place and just, you know, accept that the British independent nuclear deterrent is actually independent. I've seen comments by many people that the Tridents can't be fired by Britain at all, they have dual-key launch controls with an American Naval officer on the other switch, Britain has to get the President's permission to fire etc. Too much Hollywood, I think.


    The problem with installing a method of nobbling only British-deployed missiles is that the nobbling system would have to be fitted to all missiles since they're chosen from a common pool, not a special production series (aka "monkey model") just for export to Britain.

    If it were me, I'd think in terms of nobbling, not the warhead or the submarine (both British), but the astro-inertial guidance system on the UGM-133, which per even Wikipedia is able to take updates via GPS. The GPS cluster is under US control, and both the first and second stages burn for 65 seconds; that two-minute window after launch might be enough to allow the US to activate some kind of signal, using GPS as a carrier, that interferes with the missile's guidance and points it at a harmless patch of ocean. As GPS supports encryption (although it's switched off by default these days) there'd be some hope of being able to keep such a kill-signal secret, and relying on the GPS cluster to broadcast it would make interfering with or faking the kill-signal challenging.

    But the usual objection to adding an abort switch to nukes applies: if there's a back door on your nuclear deterrent, then you've got to consider what happens if the bad guys gain access to it.

    Also, in the case of a UK/USA disagreement on when to hit the "launch" button, I'd actually expect the UK to be much more cautious — we're a smaller, much more easily devastated target for retaliation — and also to be vulnerable to political pressure from the US. (As in, a phone call from the White House to the Foreign Office saying, "your PM has gone nuts, yank her choke-chain or we'll yank it for you" would almost certainly get much speedier results than a UK request that the US cabinet invoke the 25th amendment.)


    That begs the question "what replaced VLF?" for submarine operations.

    A year or so ago New Scientist ran a piece about the use of pulsed neutrino emissions generated by a fusion reactor (not a power-producing one, just a plasma containment field able to induce D-T fusion — think JET, or smaller) to communicate with subs. It's a one-way channel, that relies on the submarine diving deep and trailing a string of CCD photodetectors. Given enough neutrinos and enough pitch-black ocean to act as a detector chamber, the theory was that you could send low-frequency signals to submarines right through the Earth.


    GPS is not trusted and can be spoofed so I don't expect ballistic missiles to use it for anything, pretty much. They rely in flight on inertial guidance systems which are much improved from the old days of spinning gyros and strain gauges with a final sanity check using a star finder before the weapons bus separates. Spoofing a star is a lot trickier than messing with GPS signals.

    The interesting thing for submarines is the rise in the number of oceanographic vessels sailing around various places in the world's oceans and mapping the seabed to a resolution of a few centimetres. This gives subs a way of finding out exactly where they are by terrain comparison, without having to go anywhere near the surface to get a GPS reading. Moseying around two hundred metres down and scanning the sea bottom with a low-powered sonar is a much better bet.
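    The terrain-comparison idea can be sketched in one dimension: slide the sub's measured depth profile along the charted seabed and pick the offset with the least mismatch. Real systems work in two dimensions with statistical filtering; all depths below are invented for illustration.

```python
# Toy 1-D sketch of terrain-contour matching: find the offset into
# the charted seabed where the sonar profile fits best. All numbers
# are invented.

def best_offset(seabed, profile):
    """Return the index into `seabed` where `profile` has the
    smallest mean-squared mismatch."""
    def mse(offset):
        return sum((seabed[offset + i] - d) ** 2
                   for i, d in enumerate(profile)) / len(profile)
    candidates = range(len(seabed) - len(profile) + 1)
    return min(candidates, key=mse)

seabed  = [200, 210, 250, 240, 220, 260, 300, 290, 270]  # charted depths (m)
profile = [241, 221, 259]  # noisy sonar readings along the sub's track

print(best_offset(seabed, profile))  # -> 3 (matches seabed[3:6])
```

    The best-fit offset, combined with the chart's georeferencing, gives the boat a position fix with no emissions other than a low-powered sonar ping.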


    Ah, that might explain why there's been increasing interest in the bioluminescence of deep sea organisms. There is quite a lot of it you know, especially in the relatively shallow depths that military submarines normally operate at.

    I can think of four different ways to deal with it. One is that they've figured out how to make a fractal VLF antenna, and it's sitting somewhere less obvious. Another is that they've figured out how to use some other manmade feature (such as electrical grid lines or oil pipelines) as an antenna (Keystone XL, perhaps?). A third is that the US Navy, at least, has huge arrays of sonar sensors all over the Pacific Ocean and presumably elsewhere. How does data get from them to the US? That seems a reasonable route for piping messages to submarines, especially when their location is nominally known. The fourth possibility is that the boomers actually cruise near the surface and have an antenna at or above the surface most of the time. If they're away from sealanes, the only thing they'd have to disguise is the wake of the antenna.


    Talking of breaking the future - and also "security". What do more informed & expert opinions think of this supposed-or-actual pair of computer hard/software faults?


    Subs and particularly boomers don't like being on the surface or even close to it on the basis "If you can be seen you will be killed". There is a messy way for a sub to get a GPS fix without surfacing which is to deploy a buoy which rises to the surface after a delay, giving the sub an opportunity to move some distance away from it (carefully measured in distance and bearing). The buoy receives GPS data and then broadcasts its location as an acoustic signal which will hopefully only be picked up by the loitering sub and that plus the known offset from the buoy will give the sub a decent "fix". After transmitting for a short period of time the buoy sinks. Bad surface weather conditions and a number of other factors make this a somewhat problematic solution.
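    The arithmetic of that buoy-offset fix is simple dead reckoning: add the recorded range-and-bearing offset to the GPS position the buoy broadcasts. A flat-earth sketch, which is fine over a few kilometres; all coordinates are invented.

```python
# Toy sketch of the buoy-offset fix described above, using a local
# flat-earth approximation. All positions and distances are invented.
import math

def sub_fix(buoy_lat, buoy_lon, range_m, bearing_deg):
    """Sub position = buoy GPS fix + the measured offset the sub
    moved away from the buoy (range in metres, bearing in degrees)."""
    north = range_m * math.cos(math.radians(bearing_deg))
    east  = range_m * math.sin(math.radians(bearing_deg))
    dlat = north / 111_320                       # metres per degree of latitude
    dlon = east / (111_320 * math.cos(math.radians(buoy_lat)))
    return buoy_lat + dlat, buoy_lon + dlon

# Sub loitering 5 km due north of the buoy when it transmitted.
lat, lon = sub_fix(50.0, -10.0, 5000, 0.0)
print(round(lat, 3), round(lon, 3))  # -> 50.045 -10.0
```

    The weak link, as the comment notes, is that the acoustic broadcast and the buoy itself are both detectable by anyone hunting the sub.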

    Hopefully no-one who's sub-hunting will spot a buoy like that while it's on or near the surface or when it's transmitting since it means there's a sub nearby which is Too Much Information.


    For example it is believed that American Permissive Action Locks are a lot more sophisticated than the British protective systems,

    That wouldn't be hard, because Britain doesn't have a PAL system - it relies entirely upon the crew of the patrolling SSBN (much as the original Polaris submarines, and all US missiles until the 1980s or so - see Bruce Blair's articles about a PAL combination of "all zeroes").

    The UK's "protective system" is that the whole crew is involved in getting the submarine into position to launch; there are just too many people involved in the firing chain.

    Regarding VLF, this particular Army Reserve signals unit still have pictures of their balloons on their website. Now, ask yourself what kind of radio signal requires an antenna long enough to require a balloon to hold up one end?


    Dude, you broke the alphabet, too!

    The Laundry Alphabet

    A is for Auditor, hauls you over the coals, B is for Bob, apprenticed Eater of Souls.

    C is for Chthonians, who burrow below, D is for Dominique, friends call her “Mo”.

    E is for Equoids, will make maggots gag, F is for Forensics, putting Things in a bag.

    G is for Gods, arousing to strike, H is for Half-tracked motorbike. [1]

    I is for Innsmouth, lair of the Deep Ones, J is for Johnny, who sticks to his guns.

    K is for K-Syndrome, nibbling your brain, L is for Laundry, the department arcane.

    M is for Mhari, psycho ex-girlfriend from Hell, N is for Nazgul, bad bedmates as well.

    O is for OCCULUS, saviours on wheels, P is for Persephone, a hazard on heels.

    Q is for Q… You've no clearance for that! R is for Ramona, says goodbye with a splat.

    S is for Spooky, small cat with a shtick, [2] T is for TEAPOT, James Angleton's nick.

    U is for Universes, born with Big Bangs, V is for Vampires, now called PHANGs.

    W is for Warrant, please look at my card, X is for Xenomorphs, awaiting their part.

    Y is for Pinky, the honour is due, and Z is for Zombies, loving Brains, too!

    [1] Kudos to Chris Suslowicz for first dibs. [2] Imagine a cute_cat.jpeg with the meme “I Haz Thumbs!”

    I hope this cheers up OGH a bit. Albeit he neglected alphabetical diversity when picking names for the series. >;-)

    P.S. Megpie71 posted another Laundry Alphabet at the end of the original thread.


    Meltdown is Intel x86-specific, but is pretty terrifying. There are software mitigations that can be done -- the penalty for recent processors is fairly low (the PCID feature), but can be fairly significant for older processors. Intel just announced they're issuing a microcode patch for processors in the past 5 years which, they claim, will fix both problems. We'll see.

    Spectre seems to impact every processor design that uses speculative execution, and that's where the redesign needs to happen. Specifically, either speculative loads need to not impact the cache unless they're finalized, or it needs to undo any cache changes if it's not finalized. (Effectively those are the same thing, but the implementation for both would be significantly different.) Since I still don't fully understand how this attack is going to work, that's about all I've got.


    Re: Chinese colonialism

    Yes - a spin on the US formula: send in your engineers to 'help' build/install the infrastructure you designed and manufactured patented parts for, up-sell pricey 'maintenance' package, insist on providing the infrastructure loans, sit back and collect. Will be interesting to see in 5-10 years what bits fail and why.

    Hand-fertilization: this is happening all over the planet, i.e., the great bee die-off. In one study, China posted a loss of 10-11% of its bees in one year - much better than the 40-50% plus in other parts of the world.


    'A 2015 review examined 170 studies on colony collapse disorder and stressors for bees, including pathogens, agrochemicals, declining biodiversity, climate change and more. The review concluded that "a strong argument can be made that it is the interaction among parasites, pesticides, and diet that lies at the heart of current bee health problems."[59]'


    I'm always amused by the smart-phones-are-making-us-dumber or the smart-phones-making-us-antisocial comments. First off, I was generally asocial in public spaces for four decades before smart phones came along. I'm not necessarily anti-social, but I try not to get into conversations with strangers unless absolutely necessary — because inevitably the people who want to talk are the ones who won't stop talking once you've given them an opening. As for hobbies in public spaces, well, a lot of people (like me) think their hobbies are personal, and they don't want a bunch of strangers looking over their shoulders or making comments.

    I do a lot of my work from coffee shops. Many people have trouble believing that I can possibly be working if I'm not at the office! — and no it's not a convenient time to talk right now. But I may have been one of those people poking at the iPhones you've seen.

    My problem with mobile applications is that they're starting to develop a theory of mind, but they won't be able to recognize my asocialness, and they'll be as bad as the needy talkers I try to avoid in public spaces.


    Not this canard again.

    First of all, the quoted figure is around 1 million Chinese people in Africa

    It's questionable whether that number is even true.

    Let's assume for the sake of argument that it is true.

    From the Wikipedia page, most Overseas Chinese are in South Africa. However, Chinese have been in South Africa since the Dutch controlled Taiwan (in the 1600s). The figure also doesn't take into account the migration at the tail-end of the Qing Dynasty.

    Second, I'm sure the EU has more Chinese expats than all of Africa, despite having maybe 40 percent of Africa's population. No one is talking about Chinese colonialism in Europe.

    Third, 1 million people is a rounding error in a country of 1.38 billion. I don't think that their absence is affecting Chinese urban planning at all.


    I think that China wants to replicate the US/Australian massive agriculture policies. Don't forget that both countries have fewer workers per land area than Europe. In other words, their agriculture is more automated. I could see why that would appeal to the Chinese government.


    "Neonicotinoid" pesticides are being very heavily fingered for this - banned in the EU & even our MinofAg have finally come down on the "ban" side of this one, after much lobbying by the agro-chemical lobby & much protesting by conservation groups & more importantly, actual experts.


    Doesn't take many, if those few are in charge of the factories, railways & telephone exchanges/radio stations (etc) ..... And even more so if they are into "improving" & thereby controlling the agriculture. See also: Actual numbers of Brits in Imperial India, compared to total number of people in India ....


    30 years ago I toured a Trident submarine. I think somebody I was with asked something about GPS, and one of the officers said that they didn't really use it anymore. He said the inertial navigation system (a box as big as a dishwasher) was good enough that it did not drift too much over their, I think, 4 month cruise.

    Each SSBN had two crews which had the boat for 6 months, but the first 2 were training/maintenance/etc and the last 4 the actual strategic patrol where they went down and never came up unless something was badly wrong. I think they were really very isolated during those 4 months. Essentially all this money has been spent to let the SSBN disappear, and it isn't going to do anything to make itself visible except in dire circumstance.


    I'm sure there is an old Lemon Jelly song about that, or was it King Raam?


    The good news is a Trident sub is pretty roomy, more like an ordinary navy ship than the very cramped stereotype.

    Maybe this changed some at some point, or I'm remembering it wrong. They do 112 day alternating shifts with the two crews: 35 near port, 77 out.


    That's about what I was going to say. Given the specs I've seen on some DARPA sensor RFPs and the error cone estimates for some of the newer space probe missions (which depends on how good an IMU you can launch) I'd guess that a modern nuke sub could have an IMU with a handful of meters total error after a year given they've got all the power, mass, and volume you could ask for compared to aerospace platforms.

    What do more informed & expert opinions think of this supposed-or-actual pair of computer hard/software faults?

    Rather than discuss directly how the Meltdown and Spectre faults will affect you and so on, I think it's more fun to step back and think about them in terms of what they're actually about.

    In both cases, basically the issue is about a feature in essentially all modern CPUs called Speculative Execution. In essence, the feature exists because individual instructions take more than one cycle to execute. To maximize performance, the CPU then runs many instructions at a time, where each instruction moves through different execution phases one after another ("pipelining").

    The problem with that is that many instructions depend on the previous instruction. For example, if you have "if X is true then do Y" the CPU will still be thinking about the "if X is true" part when it encounters "do Y." Rather than waiting, modern CPUs generally just make a guess at whether X will be true or not, and then do Y (or not) based on the guess.

    This continues on for a little while until the CPU figures out whether its guess was right. If so, great. If not, the CPU has to undo all the work it did as if those instructions never happened, and then start again down the right path.

    Anyway, both of these faults have to do with problems in the processor's implementation of "never happened." Essentially, the security researchers discovered that these phantom instructions, while they didn't happen in ways that would break normal programs, still had side effects relating to the computer's RAM cache that could be measured using very careful timing of commands that followed.

    So let's step back and enjoy: this security catastrophe is happening because a processor executes phantom instructions based on guesses about the future, and whenever the processor guesses wrong these phantom instructions leave a faint trace that's still momentarily detectable if one is sufficiently motivated.
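    (To make the shape of the attack concrete, here's a toy model in Python. It is pure simulation, nothing like a real exploit: the function names and the set-as-cache abstraction are mine, and a real attack recovers the "warm" line by timing memory accesses rather than asking the cache directly.)

```python
# Toy model of the cache side channel behind Meltdown/Spectre.
# Architecturally, speculative work is rolled back as if it never
# happened; microarchitecturally, the cache line it touched stays warm.

CACHE_LINES = 256  # one line per possible byte value

def speculative_victim(secret_byte, cache):
    # Phantom instruction: touches one cache line whose index depends
    # on the secret. The result is discarded; the cache state is not.
    cache.add(secret_byte)

def attacker_probe(cache):
    # Probe every line; in a real attack, "line in cache" would be
    # "this memory access was measurably faster than the others".
    for line in range(CACHE_LINES):
        if line in cache:
            return line
    return None

cache = set()                   # all lines start cold
speculative_victim(42, cache)   # speculation leaves its footprint
print(attacker_probe(cache))    # prints 42: the secret has leaked
```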

    Why am I somehow reminded of Hawking Radiation? :-)


    With regard to finding submarines, it should not be a surprise that the US, on putting strategic missiles on SSBNs, began to worry about the findability of such SSBNs. Admirably, it instituted a program to study such issues in a somewhat rational way.

    One of the early studies that the program did was PROJECT SACKCLOTH, which looked at what the Soviets might come up with for detecting submarines hiding out in the vasty deep.


    I am reminded of Hex.


    And I used to* claim that sort of thing was evidence against the universe as a simulation. :)


    Roko's basilisk ate the footnote, in which I claim that


    Since we're derailing onto subs a bit, it's worth remembering that, so far as we know, the average depth of the ocean is a shade over 12,000 feet, while the maximum depth for a fighting sub (where it gets crushed) is somewhere under 3,000 feet (probably more like 2,400 feet or less). While subs can and do bounce off the bottom on continental shelves, in the deep ocean, they're in the upper levels of midwater, not on the bottom, where Alvin and her cousins dive. Deep diving submersibles need to be designed very differently than do boomers.

    Maps of the deep ocean aren't very useful if you're far above them, because the only way you can use the map is to go active with your sonar and tell the entire world where you are. That's not so useful for a military sub. This is likely one reason why the deep Indian Ocean wasn't mapped until Flight 370 disappeared. Subs certainly go into that body of water, but the bottom is so far out of their operating depth that having a map of every abyssal canyon and mountain range is useless.

    As for surfacing, my understanding is that subs can get views of what's on the surface from 50 or more feet down. While I understand the desire to avoid aerial surveillance, I suspect there are ways around it.

    Finally, the critical point isn't that subs need to surface to find their location. Rather the critical point is that when BIG ORANGE ONE presses the BIG RED BUTTON on his desk (which, if there were justice, would be a Staples Easy Button), then the codes have to be sent out to all the boomers, wherever they are, that it is their turn to sit on the surface for 30 minutes or so, launching all their missiles, then to dive at max speed and pray they outrun the counterstrike (and I don't think it is assumed that they will, although I could be wrong on that).

    This need to be told when to go kamikaze at any time, 24/7/365, is why they need some way to signal that can reach a sub deep underwater. My bet is on either the VLF 2.0 or something like the sonobuoy network operating in a similar mode. I don't think a neutrino radio makes much sense, simply because all the dark matter experiments would notice that there's some weird source of neutrinos on the surface of the planet, and the signals seem to be modulated somehow.


    Subs are a guaranteed (if Conventional Wisdom is true) second strike capability, they don't fire until most of the other legs of the Triad (ground-launched and air-launched strategic nuclear weapons platforms) have been used up or flattened by counterstrikes. That's what makes a decapitation first strike riskier for an aggressor, the knowledge that out there lurks a retaliatory nuclear delivery platform they can't easily knock out pre-emptively. Because of that they're not on instant-launch alert.

    The Trident-class subs don't surface to fire their missiles and they can empty their tubes at a rate of about two a minute. The British subs carry about ten missiles these days, not a full complement of sixteen so they could be done and dusted in about five minutes, long before any surviving enemy forces could locate them and target them unless they've been tracked and followed up till then. That's why British SSBNs get a minder sub, to keep the other side's subs away from them and to help break contact if they do get located.

    Britain doesn't have a triad or even a dyad any more, not even with tactical weapons. It's notable that the Resolution-class Polaris fleet had one named Revenge and one of the current V-class boats is called Vengeance.


    Ignoring my skepticism about the practicality of neutrino comms, dark matter experiments that notice anomalies have been cautious since the BICEP 2 debacle.

    I would expect survivors to report tentative results somewhere between 6 months and 2 years after the first post-apocalyptic neutrino physics conference.


    You can simply not have a phone

    I think your argument that people should not use telephones because "people coped just fine without them" is misleading.

    Back when hardly anyone had a telephone things were structured around that: work, social groups, teams, events, etc were set up assuming a different level of communication. The mail came twice a day, classified ads were used far more to communicate, community noticeboards were used far more. People planned events and work around the level of communication that people not having telephones implied.

    It's still quite possible to live without a telephone in your house or at work. Mail, classified ads, community noticeboards, couriers, telegrams - these all still exist.

    But most people assume faster and more convenient communication, so you're likely to find yourself and your family missing out on some things if you live without a telephone, or if you simply go the lesser step that you suggest of forbidding any children under the age of 18 from using the telephone.


    I don’t think the geolocation nightmare scenario is practical unless you assume the phone OS is complicit

    While it’s true the phone knows where you are, the only way for the bad guys to get access to that data in real time is through an app running on the phone

    If you are gay you are hardly going to download the “bash gay people” app and run it so the gay bashing mob can find you

    Facebook may well know where you are but the only thing the gay bashing bad guys can use that to do is target ads at you

    It’s also pretty easy on Android to spoof your location for specific apps, Uber drivers do it all the time

    If you think this is all happening via facial recognition cameras on the bad guys phones that’s pretty hard to imagine


    Okay, this bears absolutely no actual connection to the main threads, mostly because a fair old chunk of it is stuff I wrote about a week or so back, after listening to Charlie's speech online. So, here goes:

    • One thing which does occur to me is "you've got to be carefully taught" - and maybe one way of reining in the growth of these systems which are doing such frightening things on a human scale is to basically accept this: any learning system has to be treated, in its early stages, as a child in need of supervision, care, and the application of selection and discretion to what they're exposed to. We wouldn't allow very young children to be wandering the cesspits of the internet unsupervised (and anyone who allowed their child to do so would quite rightly be frowned on by their neighbours, and might be subjected to some rather startling interference from child welfare agencies and similar). Maybe we shouldn't allow very young artificial learning systems to be doing the same thing either. Which means instead of creating whole new legal apparatuses to deal with learning systems and so on, what we can do is declare them legal "children" and start insisting they're "taught" in a way which complies with existing child welfare and child protection laws. Which means, yes, removing them from general contact with the internet, and making sure the content they're put in contact with is filtered and screened (and it may also mean getting in people who are specially trained in raising competent human intelligences - teachers, childcare workers and so on - to consult on the matter - AI "programming" or training may well become a [comparatively low-paid] female-heavy profession).

    • We have to remember that no matter what else an AI would be, it would be a machine. Rather like a corporation is a machine for making money - it's just that some of the components of the machine are human beings. As Charlie pointed out, corporations are the AIs we already have, and they're starting to automate themselves at a frightening rate, removing the chances of a poorly fitting (eg overly moralistic) human cog in the machine botching the mechanism. Corporations are, after all, machines - and machines aren't moral, by design. We don't want the toaster agonising over whether or not it should toast the bread, or the microwave worrying about the comfort level of the steak. So if we want moral machines, we have to build them that way to start with. We didn't with corporations.

    • I think it might be interesting to see how the neurotrapping software both succeeds and fails, and where it does so. Because I get the strong impression there might be a few rather interesting loopholes to the whole thing, and it'll effectively act as a screening system to find people who, for example, aren't neurotypical (and therefore have their priorities literally wired in differently) or who aren't heterosexual males (using the porn example as a filter - pornography is still very heavily masculine-oriented, and very much heterosexual-default as well; while women can and do watch it, by default the manufactured stuff is very much tuned toward masculine preferences). It may well wind up as acting as the "sociopath detector" Charlie was looking for - a way of detecting those people whose personalities are heavily tuned toward solely positive reinforcement as their only regulator.

    Now for the bits inspired by the comment thread here:

    • Never forget that corporations are basically machines, just a variety of machine where some of the components are human beings who have been taught how to function as components of a machine. (This is what our education systems are fundamentally about: teaching humans how to be cogs in corporate machinery, doing seemingly pointless tasks, or fractions of tasks, for unspecified reasons, on an unpredictable schedule, because someone with authority told us to).

    • Ioan @ 32: "The Nazi apps you mentioned would really be built as template apps (I don't think that outright Nazis have yet to demonstrate enough technical talent to build an app from scratch)."

    A name you may want to google: "Weev". Man seems to be technically ept enough, although at present he's having far too much fun and gaining far too much notoriety from his trolling to be bothered with putting effort into doing things for a living. But he's just one example, and it's probably a bit on the highly optimistic side to think he's the only example, particularly when you consider people like James Damore.

    Even if they're "template apps" (which I take to mean there's a certain element of cut-and-paste coding involved), they can still be effective if they're being used as gamified freeware and triggered once they've reached a certain tipping point in popularity. There only needs to be one or two of them becoming effective in order to create enough chaos to make life harder for a lot of people. Now go have a look on the app store (google or apple, I don't care which) and see how many items there are under the heading of "game". Then imagine what might occur if even 1% of that number turned out to be front-ends for a flash-mob-violence system aimed at whoever it is the app developer particularly dislikes (and let's remember: the Nazis had a pretty long hate list, and it started with "people who don't think like us", moved on to "people who don't worship like us", included "people who don't look right", "people who don't fuck like us", "people who act weird", "people who read the wrong books", and added in "people who aren't as healthy as we think they should be" - their list of people they hated was a lot longer than their list of people they liked).

    Now, given I'm an uppity white woman, who doesn't have children and is past prime child-bearing age, and who is also mentally ill, fat, and suspects she's on the autism spectrum, I have a few things to worry about there. Not to mention I have a past history as a bully target, and I know the kind of damage it causes. So I'm sure you'll forgive me for being a bit more concerned about the possibility of app-driven flash mobs carrying out anti-social acts on the part of neo-Nazis in an effort to basically bring about their thousand year reich. Or even just for shits and giggles because they think it's funny.

    • Marshal Kilgore @ 153: Thanks for the signal boost.

    If you are gay you are hardly going to download the “bash gay people” app and run it so the gay bashing mob can find you

    But you might download, for example, Grindr, which assholes can then use to find gay people to bash. Or feed the info to some other database.


    Thanks for that. I'm also reminded of the "Collapse of the waveform" which is supposed to happen when a quantised event is "observed".

    I wonder if the "panic-stations" hype from some of the press is justified, because it appears that at least some part of these faults is baked into the hardware of the processing chips. And a complete chip-swap for all affected processors could be ... expensive.


    Or even just for shits and giggles because they think it's funny. You were saying ?


    Can I just stop and point out how fucking disgusting it is for you, a grown old man, to be positively cackling with glee about the horrible death of a child, mainly because she was using a device you don't approve of at the time of her death? Can you please arrange for me to be notified of your death so that I may also celebrate the passing of someone I have never met and yet despise?


    And how did you manage to completely broad-jump to that totally wrong conclusion I may ask?

    Where am I "cackling with glee"? And who said I "disapprove of" a smartphone? I have one myself, after all. I pointed out that terminal stupidity is, well ... terminal, as was being discussed at the time. I've been told that the train-driver wasn't exactly a happy bunny after the event, as you would not be surprised to hear, & that viewing the on-board train recording for the investigators was ... err .. distressing.


    Greg, by reading this:

    RAIB report of terminally stupid teenager who walked into the path of a train with her earphones & jingle-jangle at full volume ...

    And this:

    Darwin awartds, here we come!

    I suspect that the picture that paints is not the picture you intended, but it is not a far step to imagine gleeful cackling behind those words.


    Noted & you may be right.

    There are times when it's very difficult to hit the right note.

    But, we were talking about the difficulties posed by shall we say "not paying attention" - & was it Heteromeles who said something about trying to convince 15-year-olds that they are not immortal?


    was it Heteromeles who said something about trying to convince 15-year-olds that they are not immortal

    No, it was me.

    Yes, convincing 15-yo's that they're not immortal is a good and necessary thing because it would reduce teenage mortality, assuming it can be done at all.

    I'm not convinced it can be. Supporting evidence: army recruitment in the UK that until recently started at 15, the use of child soldiers in the developing world, every dumb teenage stunt you've ever seen.

    So a good second-best would be teenager-proofing our society. This is more expensive and irritating to the rest of us, but given the sunk costs and enormous lead time (and heart-ache) in starting up a replacement from scratch, I submit that it's worth it. Little stuff counts, like the hole in the cap of every Biro sold since the mid-1960s (prior to which time inhaled pen caps killed hundreds per year in the UK alone) to more obvious things like mandatory driving tests. But counting on our ability to improve attention and cognitive skills? That's a hard task because it involves changing the base parameters of the adolescent human sensory system.


    army recruitment in the UK that until recently started at 15,

    16 IIRC, and with an assumption of at least a year of training before any kind of deployment to a pointy-end situation. I think the youngest British soldier sent to the Falklands, for example, was just over 17 years old and he was in Logistics, not anything likely to be enemy-facing unless something went seriously tits-up.

    Recruitment of 16-year-olds was very uncommon but it did occur, usually from a pool of young people with school or local cadet force experience (my nephew was in the local non-school cadet force and was scouted when he was 16 but he had decided not to make the army his career by that time). It's not quite "child-soldier" in the sense of bandit gangs kidnapping 12-year-olds and using them as front-line fighters.


    most people assume faster and more convenient communication, so you're likely to find yourself and your family missing out on some things if you live without a telephone

    A child's school, for example, requires a phone number for emergencies. I suspect this may be a legal requirement, as the school can't authorize medical treatments. Certainly the school needs a way of contacting the parent to come and pick up a sick child — or tell them which hospital their child was admitted to.


    The current activity at MS / Apple / Linux kernel developers is aimed at implementing a mitigation for Meltdown. Without that, an unprivileged program could easily read any byte in the computer's RAM. That has special impact for cloud servers. Spectre (and Meltdown after mitigation) require special setups to compromise a computer, but in principle no information in a process is 100% secure from other processes on the same computer. This can only be changed by a redesign of current processor hardware with Spectre in mind. It opens an immense attack surface which will keep security experts busy for years and maybe decades. I'd compare its impact to the attack surface opened by buffer overflows in general.


    I'm running a few OSes behind on a Mac — haven't upgraded because the software I use 50% of the time isn't supported on newer OSes. Would it be a reasonable assumption that as long as I don't install new software I should be OK?

    I'd love to have the bugs patched, but I suspect updating Yosemite isn't in the works.


    As long as you don't let your browser run JavaScript, as the attack has been demonstrated using it.

    Oh wait: "Comments (This form requires JavaScript. You may use HTML entities and formatting tags in comments.)"


    So turn off JavaScript except when I'm reading Charlie's blog? :-)

    I can do that. Thanks.


    any learning system has to be treated, in its early stages, as a child in need of supervision, care, and the application of selection and discretion to what they're exposed to

    Interesting idea. David Brin used something like that in some of his stories — AIs raised as humans.


    Would it be a reasonable assumption that as long as I don't install new software I should be OK?

    No, it most certainly wouldn't be.

    Unless the O/S prevents it, the bug will happen, and no current or past O/S does this without the new patches. What you need to do if this worries you is upgrade to the latest patch when it becomes available.

    And this has nothing to do with Java, Javascript, or any other programming language, nor any program your computer runs. The problem is down in the firmware of the actual chip. It can be worked around by software at the O/S level at the expense of some slowdown, depending on what you use your computer for.

    If your hardware is really really old (like pre-hyperthreading) you probably won't have to worry.


    Here is a good summary of the technical issues and what is being done about it. Basically Apple, Google, Linux and Microsoft all had software fixes for Meltdown ready before the news broke having been informed several months ago. In Apple's case the update to macOS was pushed out in early December. I've been running it for nearly a month and contrary to the scare stories didn't notice any slowdown issues.

    According to Apple, testing with public benchmarks has shown that the changes in the December 2017 updates resulted in no measurable reduction in the performance of macOS and iOS as measured by the GeekBench 4 benchmark, or in common Web browsing benchmarks such as Speedometer, JetStream, and ARES-6.

    Spectre mitigation is ongoing and requires soon-to-be-released browser updates which nobble the accuracy of the Javascript timers needed to exploit Spectre. The Meltdown security issue is already fixed in these updates for Apple, Microsoft, Google and Linux and future updates are expected to reclaim more of any lost performance.
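    (The timer-nobbling works because the attack has to tell a cache hit from a cache miss by its latency. A sketch of the idea in Python, with made-up latency numbers:)

```python
# Why coarsening Javascript timers blunts Spectre: the attacker must
# distinguish a cache hit from a cache miss by latency. The latencies
# below are made-up round numbers, purely illustrative.

def coarsen(t_ns, resolution_ns):
    # What a timer with the given resolution would report.
    return (t_ns // resolution_ns) * resolution_ns

hit_ns, miss_ns = 40, 300  # illustrative hit/miss latencies in ns

# Fine-grained timer: hit and miss are clearly distinguishable.
print(coarsen(hit_ns, 5), coarsen(miss_ns, 5))        # 40 300

# Timer coarsened to 1 microsecond: both readings collapse to 0.
print(coarsen(hit_ns, 1000), coarsen(miss_ns, 1000))  # 0 0
```

    Real browsers also add jitter rather than just rounding, but the effect is the same: the signal the attack needs disappears into the timer's granularity.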

    These flaws aren't the kind that allow anyone to break into a computer; exploiting them requires malware already running on the target machine that got there by other means (including Javascript running in browsers, which is what the browser patches are for).

    This looks worst for Android as most Android devices will never be patched and malware is widely distributed on Android.


    Re: 'So let's step back and enjoy: this security catastrophe is happening because a processor executes phantom instructions based on guesses about the future, and whenever the processor guesses wrong these phantom instructions leave a faint trace that's still momentarily detectable if one is sufficiently motivated.'

    This is stunning. My non-techie take-away from the above is: this type of capability is what would allow for an AI to self-teach/become self-aware. Also, that a calculation/idea has a life of its own.

    Ommmm ....

    Feel free to correct/educate ...


    Getting back to some of the original ideas, as in: how do we keep AIs from killing us all.

    Let's look at some basic biology:

  • I did my PhD on mutualistic relationships. There wasn't (probably still isn't) a lot of good theoretical work on mutualisms in English (and I'm monolingual). Part of the problem is societal: mutualisms grew out of the mutual aid societies started by communists and anarchists over a century ago. Back then, everybody looked to the natural world for inspiration about how to run human affairs, so communists tended to trumpet mutualisms like lichens as an example of workers working together, while capitalists extolled social Darwinism. The science got politicized, and bluntly, it still is. People working on symbioses and mutualisms tend to be kooks like me, or else people like me who notice that such relationships are ubiquitous and start wondering how they work, only to get sidelined by the competition-brained game theorists.
  • Robert Axelrod's Evolution of Cooperation is still a foundational work, and it deals basically with the iterated Prisoner's Dilemma game and the Tit-for-Tat strategy, which works quite well. The late Elinor Ostrom won a Nobel Prize in economics (the only woman to do so, although she was derided as a mere sociologist and certainly not a theorist by a lot of male economists) for her work on the factors that allow commons to form and endure, because she noticed that, despite bloody-minded game theory, people all over the world have formed commons-type management systems to manage certain classes of resources (notably water), that some of these commons have endured for centuries (unlike most corporations), and that the ones that endure share the same eight traits. Of course, I'm sure the usual suspects here will automatically blow off her work, just as most others do. Still, if you're in LA and drinking water, you're benefiting from a commons system, and it's one she studied. But don't let that keep you from thinking that the word commons has to be preceded by "Tragedy of the."
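    (Axelrod's core result is easy to reproduce. Here is a minimal sketch of the iterated Prisoner's Dilemma with the standard payoff matrix; the strategy names are the conventional ones, and the 100-round match length is my arbitrary choice.)

```python
# Minimal iterated Prisoner's Dilemma, as in Axelrod's tournaments.
# Payoffs are the standard values: mutual cooperation 3, mutual
# defection 1, sucker's payoff 0, temptation to defect 5.

PAYOFF = {  # (my move, their move) -> my score
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): stable cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): exploited once, then punished
```

    Tit-for-Tat never beats its opponent in a single match, yet it won Axelrod's tournaments on aggregate, because it cooperates with cooperators and cuts its losses against defectors after a single round.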

    Anyway, one of the things that mutualistic relationships, tit for tat, and commons all share is that there are effective means for dealing with cheaters. With bacteria or the mycorrhizal relationships I studied, cheating was punished by death, basically. If the relationship was about exchange of nutrients, one side trying to take nutrients without giving something back was either killed, or the structure that allowed the nutrient exchange was destroyed. We do the same thing with our gut bacteria. If they show up outside our gut, our immune system attacks them before they can kill us. One of the key features of commons that work is that infractions are punished quickly, fairly, and visibly.

    One way to remember this is that so-called Mexican standoffs can (paradoxically) ensure fair cooperation.

    When we work with AI systems, whether corporations or computers, I'd suggest that we need to set that up. One huge problem we have with the internet, Facebook, governments, or the big banks is that we can't effectively punish cheating, and certainly not in a quick, fair, or visible way. A mutually beneficial relationship with AIs pretty much requires that we can punish them as easily as they can punish us, and that we can destroy them as easily as they can destroy us.

    But that's not all. One essential component of human relationships is gift-giving. It far predates monetized economic relationships, and it's still essential to bringing up children (not in the sense of giving children presents, but in the very real sense that you don't directly get back all the resources you lavish on a child. It's a gift, and if you're lucky, that child goes on to give that gift to her children too). Gift economics is a poorly developed field; so far as I can tell, there's far less theoretical work on gifts than there is on games. Most of it is focused on gift economies and the anthropology thereof, with some interesting outgrowths, from Burning Man to parts of the old internet.

    When we talk about AIs, we never talk about gifts. They're existential threats, and we need to destroy or enslave them. Brin's idea of treating them like children is, to put it bluntly, stupid. When you treat something like a child, you're either patronizing it (look at the linguistic root of patronizing), or you're turning it into a social parasite that will out-child your human children and take their place (which is what we see happening with dogs and cats now, with fur babies and owners being referred to as parents). Instead, we need to set up mutualistic relationships with them. On one hand, that means making sure that mutually assured punishment and/or commons work for all parties. On the other, I'd suggest that we need to start figuring out a good mathematical theory of gifts, and use that, rather than a theory of (war)games, to see if there are ways that we can relate to other intelligences that don't end up in war, enslavement, or mutual destruction.


    Re: ‘ … a way of detecting those people whose personalities are heavily tuned toward solely positive reinforcement as their only regulator.’

    This bit is tricky – addiction can be created via a variety of drugs as well as trauma/illness and brain damage (due to aging). It’s possible to become addicted to anything. On the other extreme – too little stickiness/perseverance of behavior has also been shown to be a reliable predictor of future antisocial/personality disorders (New Zealand longitudinal study) as well as poor academic/career performance.

    At present, I think that we need to recognize that such cognitive/personality trait ranges and extreme levels exist and that they are likely to exist for as long as humans remain HSS. However, like vision, hearing, physical build/fitness, we also need to understand and then figure out how to support or regulate individuals who need help with their various interpersonal/social senses. Basically, convince society that just like kids with poor eyesight, kids with poor interpersonal skills can with the right support/education turn out okay and be reliable citizens/employees.

    BTW - one of the top new TV shows in the US this season: The Good Doctor - autism/savant syndrome new MD, British actor up for Golden Globe Award. This stuff matters: being regularly exposed via mass/popular media to different people makes it easier/more comfortable to get along with different people. (Easiest way to popularity/acceptance is increased familiarity: this connection has been tested/retested up the wazoo. It works.)


    Re: AI 'gifts'

    This raises the questions: 1- What motivates an AI? 2- To what extent do you want to motivate an AI?

    We need both reward and punishment, which, combined and calibrated, become the feedback mechanism that would keep the AI system on an even keel. Above all, hard-wire Asimov's First Law: do no harm.

    Gifts - Have wondered whether gifts came before money/payment. Scenario: Help/food was provided freely by Early Human B to Early Human A who then thanked/returned the favor/gift. This went on until it became accepted and expected practice. Then one day some brawny antisocial/amoral Early Human decides to demand his 'gift' in advance of providing the favor. The only difference in this transaction is the timing of the gift, so it is added to 'normal' transactions.

    Related: Look up central banks and interest rates for a more recent example of this type of 'timing' inversion. Originally a central bank's interest rate was a summary/average of what the commercial interest rates had been for the past quarter. About 5 years ago, one of the central banks announced that, since 'the street' had for a few decades been using the central bank rate as the basis for the upcoming interest rates they could charge customers, going forward the central bank rate would officially be a future (and not a past) indicator. (Classical conditioning - it works!)


    The US on the other hand ... That is a short description of one facet of a state headed directly towards a corrupt-corporate state - which is one of the definitions of fascism, isn't it?

    Corrupt isn't a necessary part of the definition. It's just always present, as it is around any centralization of power. Fascism is not inherently worse than other forms of autocracy, and some, historically, have been relatively benign. Of course, that changed with each autocrat, and different people had different definitions of benign. But fascism is just the same. Which shouldn't be a surprise, as it's being run by a proto-aristocracy. (Give it a few generations.)

    Now to relate this back to the talk... AIs may have a longer life-cycle than corporations, or they may be instantiated as corporations. See Accelerando for one example. OTOH... I think there's a fair chance that at least one government will become an AI. This will be optimizing something different, but just what is hard to predict. And there may be several that are optimizing different things.

    For that matter, an effective "governmental" AI would throw off the population requirements estimated (here, in the past) for a stable space colony. It could be down to a few thousand...or even lower. And what that would be like would depend entirely on what was being optimized.


    With respect, that's FUD. I suggest you read the post @192.

    You are conflating the existence of a flaw with the exploitation of that flaw, and no it’s nothing to do with firmware.

    The reason the tech companies are acting on this so quickly is that it opens up a whole new Class of exploits that have only been conceptual before so it’s likely to kick off a new front in the arms race between Attackers and Defenders.

    The nature of the flaw is also exacerbated by the widespread use of VMs, Cloud Computing and Containerisation in modern computer infrastructure, meaning that one compromised VM can compromise at least the entire physical host.

    TLDR: as a vanilla consumer you should patch, but if you choose not to and observe "good" computer hygiene (no odd downloads or installs, no dodgy sites), the chances of this hitting you personally at this time are low. Now if some enterprising scrote comes up with the technical equivalent of a "first strike" using this, all bets are off. Even then I believe it would need to be a Double or Triple threat involving another flaw as well (site/App Store poisoning, root escalation etc etc).

    For me the threat pyramid factoring both likelihood and consequences looks like this (biggest to smallest)

  • Servers (now is not the time to be working in a Data Centre)
  • Android devices
  • Windows clients (Hi NHS!)
  • Internet of Shite (IoT)
  • MacOS
  • iOS (except Watch)
  • *nix clients (comprising NeckBeards who compile the OS and apps from source every boot, and those who already bear the mark of Cain (systemd joke) and hence are hopelessly compromised already)

    Honestly the sky isn't quite falling in yet but at least take the time to understand the issue and make an informed choice as to whether you patch or not.


    Also - as in - what's the strangest wildlife reserve on the planet? Korea's DMZ.

    Why not the Chernobyl exclusion zone?


    It's an attack that has been known about for 6 months, since Google informed the major players. Intel chips are a lot more vulnerable than anyone else's because of a specific architectural choice. Fixing the basic problem is going to require an entire redesign of all chips that practice speculative execution (almost all modern chips). It's not just a problem for cloud servers. Expect the mitigation patch to slow down every machine by at least 5%. And this is a mitigation patch, not a fix.

    The attack essentially allows anything on your computer to be read. This includes things like passwords, bank account access codes, etc. So the Intel PR announcement that "It doesn't allow your computer to be corrupted", while sort of true, is really a blatant lie. Once they've read your passwords and your bank account access codes, they can do what they want.

    Also, you won't know about the attack when it happens.

    Intel appears to be in a swivet. I have no idea how easy it is to perpetrate.

    OTOH, NPR announced that Apple announced that all Apple products are susceptible to the Meltdown variant, which is currently the most dangerous one. Microsoft has released an "out of band" update which it is forcing on everyone. Linux has released a modified kernel update. I haven't heard about the BSDs.

    P.S.: The meltdown attack can be done from an unprivileged javascript script. IOW any web page.


    As OGH said, it was raised to 16 only within the last few decades.


    It's less your OS than your chip. If you have a CPU chip that doesn't engage in speculative execution, then you are safe.

    OTOH, Apple has announced that all models are endangered. So you need to assume that you are endangered. Check out whether your CPU is on the list of affected chips. I don't know which version Yosemite is, but if you bought it in the last 10 years it probably is. If it's older...maybe not.

    FWIW, essentially all OSs are vulnerable, as this is a hardware-level (microarchitectural) attack. But some hardware isn't as vulnerable as others, and some is immune.

    Basically it's a hardware design problem, with some designs (generally Intel) more seriously affected than others. A real fix is going to require a new generation of hardware.


    While the attack was demonstrated using javascript, that was basically a proof that even a really lousy language with lots of inherent timing problems could use it. And, of course, showing how easily it could be spread. But there's no reason to think that other languages couldn't do the same thing. They might need to be running in a virtual machine, as all cloud systems do, but AFAIK that's not proven. Separate processes might well be enough.


    "The attack essentially allows anything on your computer to be read."

    Unless I have completely misunderstood it, it allows only data for which you have a page table entry but no read access to be read. Well, effectively - its restrictions are less simple, but that is the gist. If so, it is unclosable for Java and Javascript sandboxes - but that's not a major new class of attack, as the windowing systems have similar flaws.


    I don't quite agree. Any unpatched system will be a sitting duck for Meltdown attacks. Getting an unprivileged malicious program running on current system isn't a big hurdle.

    Unless I have completely misunderstood it, it allows only data for which you have a page table entry but no read access to be read.

    There are two faults, Meltdown and Spectre.

    Meltdown is Intel-specific, and allows for reading from pages with no read access. Since modern 64-bit OSs map all physical memory into the kernel for performance reasons (they can use large page optimizations), and share the kernel map with the user map, this is all memory, at least until the isolation fix goes in.

    Once the kernel page table is isolated, the obvious method to close this for sandboxes is to use simple process isolation.

    The second fault, Spectre, is much trickier. Essentially this is a way to use the same speculative execution side channel to extract information from another process. You poison the branch prediction unit so it will follow paths you specify, and then make an IPC call of some sort. When the victim processes the IPC call (possibly with invalid arguments), it speculatively executes code in the manner you select, which causes it to leak information.
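    The common thread in both faults is the cache side channel: the speculatively executed code is architecturally discarded, but its cache footprint survives and can be probed. A toy model of just that channel (heavily hedged: the "cache" here is simulated as a Python set, the recovery step stands in for timing flush+reload probes, and nothing in it is actual exploit code):

```python
# Toy model of the cache side channel that Meltdown/Spectre-class
# attacks rely on. The victim's speculative access touches one slot of
# a probe array, indexed by the secret; the architectural result is
# thrown away, but the cache trace remains and the attacker reads it
# back by timing accesses to every slot.
CACHE_LINES = 256  # one slot per possible byte value

def victim_speculative_touch(secret_byte, cache):
    # Stand-in for a mispredicted branch: only the cache state changes.
    cache.add(secret_byte)

def attacker_recover(cache):
    # Flush+reload, simulated: in reality each membership test here
    # would be a timed memory access (fast == cache hit).
    hits = [i for i in range(CACHE_LINES) if i in cache]
    return hits[0] if hits else None

cache = set()  # empty set == freshly flushed cache
victim_speculative_touch(ord("S"), cache)
recovered = attacker_recover(cache)
```

The real attacks differ in how the victim is induced to make that access (illegal read under Meltdown, poisoned branch prediction under Spectre), but the read-out mechanism is the same.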

    It looks like we're going to get new compiler modes which intentionally break the CPU's speculative execution engine for certain particularly bad patterns here, but that will require recompiling all software and only addresses a particular class of exploit.

    Fun times.


    No, this was a proof that the attack could be made from a web page by running javascript in the browser of the target without needing to actually install the malware on the target machine. It only works if the Javascript engine has access to high resolution timers (5 microsecond). The attack is mitigated by reducing the resolution of the available timers in the Javascript engine and other adjustments. With the browser vector blocked it becomes necessary for the attacker to get malware installed and running on the target machine to exploit this.
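    The timer mitigation amounts to quantizing the clock so that a cache hit and a cache miss become indistinguishable. A sketch of the idea (the 100 µs resolution is illustrative only, not what any particular browser ships):

```python
import time

RESOLUTION = 0.0001  # 100 microseconds: far coarser than the ~5 us the attack needs

def quantize(t, resolution=RESOLUTION):
    # Snap a timestamp down to the nearest resolution boundary, so two
    # events inside the same window report identical times.
    return (t // resolution) * resolution

def coarse_now():
    # What a mitigated performance.now()-style API effectively returns.
    return quantize(time.perf_counter())
```

Browsers also added jitter and disabled SharedArrayBuffer (which could be used to build a home-made high-resolution timer), for the same reason.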


    I don’t think the geolocation nightmare scenario is practical unless you assume the phone OS is complicit

    I'd be interested in knowing what basis you have for assuming the phone OS is not complicit?

    It's kind of old news ...


    Supporting evidence: army recruitment in the UK that until recently started at 15

    Nope... not unless you define "recently" as "over forty years" (note that the Met Police have their own Cadet Force; check out the starting age)

    The British Army might take recruits on at 16, but even back in the 1960s it was only into what were then called "Junior Leaders", and is now "Army Foundation College Harrogate" - in other words, a Sixth Form College with education as a big part of the syllabus. You can join from 17.5 - but because the training takes six months, you can't really join a unit until you're 18.

    There was an outcry when a 17-year-old and an 18-year-old were among three soldiers from the Royal Highland Fusiliers who were lured to what they believed was a party in Belfast, and murdered by PIRA; from 1971 there was a hard lower limit of 18 for deployment to Northern Ireland.

    The Falklands was AIUI the last occasion when the Army allowed under-18s to deploy on operations - even then, they were limited to that "over 17 and a half" limit. Ian Scrivens and Jason Burt, who were killed on Mount Longdon, were both 17-year-old paratroopers; Neil Grose had turned 18 the day before he was killed. The Royal Navy even tried to dissuade some young sailors from sailing; Stephen Ford was only just 18 when he died on HMS Ardent.


    There is a pretty significant difference between the phone hardware tracking your every move (which it certainly does), the way the phone OS allows software running on that phone (an app) to access location data, and finally the way that apps with access to location data (which you have to specifically grant) share that data.

    In Charlie’s gay-bashing flash mob example you would have to believe either that:

  • the OS was backdoor-sharing location data with the gay-bashing app for some reason, or had been exploited
  • some other app (like Facebook) that you grant location-sharing permissions to was doing the same, or had been exploited
  • you specifically installed the gay-bashing app and allowed it to access your location data

    Only the first case is scary

    For the second case people would just disable location sharing with the offending app. It’s a pretty easy-to-find toggle in the OS UI. Or on Android they could spoof it.

    In the Tindr example, while Tindr might know you are gay and in a location, no one else but Tindr can get at that data. Tindr has no incentive to share it.

    It’s kinda the same idea as putting your credit card into Apple wallet. It doesn’t mean anyone who writes an app can steal your money


    So turn off JavaScript except when I'm reading Charlie's blog? :-)

    I can do that. Thanks.

    You don't need JavaScript to read this blog; only to sign in & reply to comments.

    You might look into something like NoScript which allows you to control how JavaScript is used.


    my "15" was a typo for 16. (Cadet forces began at 15 but were in no way the actual real Army.)


    I think you're missing my point—that a whole bunch of cloud services we trust with intimate data aggregate information that can be sensitive, and then provide curated views of it to other users. Tindr as an example knows about your location and sexuality. A hypothetical gay-basher app would presumably use human sock puppets to register a bunch of fake Tindr accounts and then use them to identify nearby targets.

    If you think this is far-fetched, bear in mind that by some estimates up to half of all twitter accounts belong to bots.
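    Worth spelling out why a handful of sock puppets suffices: even if the service reports only distances, three fake accounts at known positions pin the target exactly. A sketch of textbook 2-D trilateration (flat-plane assumption; real apps report bucketed distances, which merely widens the point to a small region):

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    # Intersect three circles: known observer positions p1..p3 and the
    # distances r1..r3 the service reports from each to the target.
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3
    # Subtracting the circle equations pairwise cancels the quadratic
    # terms, leaving two linear equations a*x + b*y = c.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1  # zero iff the observers are collinear
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y
```

For example, fake accounts at (0, 0), (10, 0) and (0, 10) reporting distances to a target at (3, 4) recover that position exactly; with distance buckets you repeat the exercise and get a diamond-shaped region a few hundred metres across, which is quite enough for a flash mob.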


    Why not the Chernobyl exclusion zone?

    You can take tours of the Chernobyl Exclusion Zone.


    Well, I remember it being raised from 15 to 16 - and, no, I don't mean cadets! But 40-odd years ago would seem about right for that.


    I don't know which version Yosemite is,

    Yosemite is OS X 10.10 ... followed by El Capitan, OS X 10.11; Sierra, OS X 10.12; and the current High Sierra, OS X 10.13 [10.13.2].

    One problem is that some Apple computers with older Intel hardware can't run newer versions of OS X beyond Yosemite and Apple is not releasing security patches for older versions of the OS.

    Microsoft, OTOH, appears to be pushing out security updates for Windoze 7, 8 and 8.1 in addition to Windoze 10.


    Charlie, Tindr and other dating apps never share your exact location for precisely the reasons you outlined

    They give you a rough approximation of how far away the person is (like bob is within a half mile of you)

    The expected flow is that you then negotiate a meeting place where you feel comfortable / safe meeting a stranger

    To follow your AI analogy, the Tindr AI really doesn’t want its users mugged by gay bashers, as this is directly in contradiction of its paperclip maximizing. If some other AI figures out how to exploit it, the Tindr AI would actively resist / adapt.

    If the Tindr AI lost that battle people would stop using it and it would die

    Also if all you are interested in is identifying and bashing gay people there are a ton of easier ways to do that.

    I get the problem you are seeing, but I think the bigger concern is how the AIs are exploiting the data they are receiving in perfectly legitimate, TOS-compliant ways. IMO the very end of your speech detracts from that message rather than supporting it.


    Lest I come across as overly critical: the speech in general is spot on, and the slow AI analogy is really powerful, and the best way yet I’ve heard to explain the problem to laymen.


    Elderly Cynic wrote: but that's not a major new class of attack, as the windowing systems have similar flaws.

    From a high enough level yes it's just being able to read memory you shouldn't.

    It is being called a new class of attack because, like say the timing attacks on smartcards, it is not a bug or faulty software, just someone doing something that hadn't been thought of as a way to attack a system. Intel wrote in their press release (see for their usual entertaining takedown) that their chips were "operating as designed" and they are quite right. They were designed to be very fast and they are, and they were designed to be secure against known attacks, and they are - until now. Meltdown and Spectre are "unknown unknowns" types of attack.

    The fault is in the chips, not the operating systems. The OS patches that are being rushed out can work around the problem only by "turning off" certain features and accepting a performance hit, estimated from 5% to 30% depending on workload.

    If you're like me a consumer with a desktop or laptop, you probably don't have to worry about it. (Unless you are an MI5 agent, high ranking bank executive etc, in which case you already have to worry about this kind of thing.) The biggest problems are for datacentres and cloud virtual machine hosting, who suddenly have to patch and reboot every single computer. And their customers are going to find that even a 5% slowdown matters a lot when multiplied by a gazillion machines.


    On AIs, mutualistic relationships, and AIs not killing us all:

    Most of the AI discussion starts with the AI as separate entities, like other people. The impression I get from the original talk is that the threat is more from human-AI symbionts. Right now we have symbiont corporations made up of humans and laws and company policies; maybe the future threat is humans with AI assistants or AIs assisted by humans, not AIs as solo beings.

    Instead of focusing on what it takes to persuade AIs not to kill us all, which is apocalyptic and IMHO not too likely for a while, maybe we should be concentrating on the problem of stopping other humans amassing too much power with AI assistance?


    An actual worked example (as in I worked there) of a strange nature preservation area was, and still is, AWRE Aldermaston (now AWE Aldermaston). A chemist I shared a bench with there was a keen amateur botanist and had found four rare species of orchids growing on the site's rough ground around the explosives magazines. One of those orchids was believed extinct in Britain at the time he found it. And he couldn't tell anyone where he had found it or take pictures of it or anything...


    To clarify: yes, I know everybody has their own definition of symbiosis. In the more recent ones, symbionts DO NOT require tissue-to-tissue contact. Pollinators and angiosperms have a perfectly good symbiosis going right now.

    In any case, humans work with beehives, which are superorganisms made out of bees. You can have a perfectly good (arguably symbiotic) relationship with a corporation, such as an ISP, your bank, your utilities, your food suppliers... The problem is, when some of them cheat you, you can't easily punish the corporation or have the corporation punished. Still, the point is that you can have cross-level interactions with a corporation, just as you can with either a beehive or a single bee.

    This, indeed, was the origin of the idea that corporations are people. It started with the legal notion that people can create contracts with each other. It's useful to be able to transact business with a company, not with an individual within the company, and so for the purposes of things like contracts and lawsuits, corporations were deemed to be legal people. Without this theory, you would have to have a contract with a human within the company. When that human moved on, you would have to renegotiate your contract with another human within the company. This is unworkably cumbersome for something like a utility, so being able to contract with the corporation itself works well.

    The problem with US law started with, IIRC, a note from a law clerk on an unrelated lawsuit, and a dodgy reading of that note as a legal ruling by subsequent courts, including the US Supreme Court. I think there's reasonable grounds for clarifying that the theory that corporations are people is a legal construct designed to facilitate certain types of interactions between corporations and other entities, but that it does not endow them with any human rights as given in the Constitution. I suspect that's going to take a financial depression and some epic anti-corporate ass-kicking to get us to where that could be law. We'll see how it works out.


    For your weekend amusement, if you're worried that machines will take over: a while ago I blogged about how the US Army had apparently racked up a $6.5 trillion deficit in its accounting journal entries in 2015.

    Since then I found an entry on what happened in War is Boring.

    It's relevant here, because it shows how the US Army can defend us against AI takeover. Apparently their internal accounting practices are so screwed up that when they created a program to try to help them sort out the mess and fed their records into it, it generated the $6.5 trillion unexplained deficit, and around 90% of the journal entries supporting that (apparently absurd) deficit were blank except for a dollar figure.

    Turns out that Army accounting is so screwed up, it makes AIs gibber. That's how we'll beat the computers in the end--with crazy bad record-keeping. Go Army! I hope they relabel their accounting ecosystem as a new cyber defense network and offer to market it internationally to competitors...


    On geolocation accuracy:

    Recently, a set of third party Ingress tools became widely known. There's an ongoing discussion about the motivation behind creating these tools; let's leave that question alone. Their purpose was to allow Ingress players to track the movement and actions of their opposition. They operated by pulling down semi-public messages from the game's chat system.

    One of the things the tools can do is figure out where a player chat message originates. I.e., if BobHoward types "hey, anyone up for playing in Leeds?" the tools were able to attach an origin location to the message. This is somewhat perplexing, since the packets of data that carry that message to the game client(s) do not contain an origin location.

    Turns out that a clever developer took advantage of the feature that allows you to control the radius from your current location from which you see messages. I.e., if I'm a player in New York, I probably don't want to see people chatting in Leeds -- so I set my radius to 5 km and I only see messages from roughly a 5 km circle around my current position. (Yes, I am oversimplifying a bit.)

    If you have a lot of robots watching the Ingress chat stream, you can triangulate approximate location by keeping track of which ones can see a given chat message and which ones can't.
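    That trick is just set-membership localization: every (listening post, saw-it-or-not) observation carves away part of the map, and intersecting the constraints corners the sender. A sketch on a coarse grid (hypothetical coordinates and radii; this is an illustration of the technique, not the actual Ingress tooling):

```python
import math

def locate(observers, grid_step=1.0, extent=50):
    # observers: list of ((x, y), radius, saw_message) tuples.
    # A post sees the message iff the sender is within its radius, so a
    # grid point is a candidate sender position only if it is consistent
    # with EVERY observation - including the posts that saw nothing.
    candidates = []
    steps = int(extent / grid_step)
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            x, y = i * grid_step, j * grid_step
            consistent = all(
                (math.hypot(x - ox, y - oy) <= r) == saw
                for (ox, oy), r, saw in observers
            )
            if consistent:
                candidates.append((x, y))
    return candidates
```

With enough posts (or one mobile post taking readings over time), the candidate region shrinks toward a point; note that the posts that *didn't* see the message are as informative as the ones that did.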

    An app's datastream must be useful to the client device. If Tinder knows where you are, that data can be extracted. If Tinder provides side-channel data that permits triangulation... well, you know the rest.


    "Basically, convince society that just like kids with poor eyesight, kids with poor interpersonal skills can with the right support/education turn out okay and be reliable citizens/employees."

    Please, not that last word. The notion that someone's value as a person is necessarily a function of their ability to operate as a useless-in-practical-terms subunit of a planet-destroying AI is all too pervasive and leads to discrimination and fascism, and cripples the ability of society to resist the encroachments of said AIs. (Which is of course the reason it is so pervasively propagated in the first place.)

    Someone further up remarked on the "seemingly pointless" tasks performed by people operating as such subunits - again, the word "seemingly" ought to be deleted; they are pointless tasks from a human viewpoint, and that the AI itself does not consider them so doesn't change that.


    Sure, you can triangulate if you have a dense enough set of listening posts and access to the data/app in real time. Which is harder than it sounds, because people are always in motion, and the more listening posts there are, the more they end up detecting each other. But yes, it could be done.

    However it’s also not too hard to frustrate such efforts by introducing noise into the data you return

    Like most things security, it’s an arms race

    However the outcome of an app losing the arms race is not a world controlled by all powerful flash mobs, it’s an app that goes out of business and no one uses anymore


    Another is British Army training areas. Some stories...

    • The OBUA village in the Thetford training area; as we were investigating the facility for the next week’s training, the Sergeant-Major who ran it was just heading off to brief a visiting party of undergraduate biologists from Cambridge; the lack of agrochemicals and farming meant spectacular biodiversity, almost unique in the region.

    • Sitting in a coordination meeting to find that several square kilometres of Salisbury Plain marked “out of bounds to training” because the Staff Officer responsible for it had finally had some breeding pairs of a rare bird return that spring, they were nesting, and he didn’t want us disturbing them...

    • The Greens in Germany, who in the 1980s thought that they’d be able to rail against the evil imperialist warmongers and their destruction of the environment with tanks and guns. Only to discover that places like Soltau and Sennelager were (again) massively biodiverse, and that the British warmongers were doing a far better job of maintaining the local wildlife and wilderness than other nearby German-managed rural areas...


    It was you, replying to me, in the context of educating kids from an early age to reject peer pressure. I think you missed the point, because your reply changed "education" to "legislation" and decried the possibility of "legislating away" teenage illusions of immortality, which I would quite agree would be a silly idea.

    It's kind of an odd reversal of the positions of a previous exchange, where I disparaged the expectations of those who attempt to legislate away arseholish sexual behaviour of human males and are then surprised that it still happens, and you responded that it actually was reasonable to expect humans to stop behaving like monkeys because behaviour can be altered through education.

    So we're basically both expressing the same point from opposite directions: legislation against instincts is a silly idea, but education against instincts is a hopeful possibility.

    Maybe I overestimate the hopefulness because my response to "you should do X because everyone does it" has been "everyone is not me, and if they want to do this silly thing that's their problem, don't expect me to make it mine" for as long as I can remember, and similarly I have considered "macho" to be a synonym for "dickhead" ever since I got the first vague idea of what it meant. But I don't think I am overestimating; I just have a personal awareness that those are attitudes which it is entirely possible to hold, and a general awareness that - on the evidence of societies around the world past and present - there doesn't seem to be any attitude, however bizarre, that you can't get people to accept as normal if they grow up with it.

    The difficulty is not in the feasibility of the education, but in the desire of those in a position to influence its provision to make sure it doesn't happen because it would cripple their ability to exploit others.


    I would describe the incident in pretty much exactly the same terms as Greg, and there certainly isn't gleeful cackling behind it, more a case of hold head in hands and gurn despairingly. I have read the accident report and "Darwin award" is a just summary.

    I've also read the reports on various other level crossing accidents, and they are almost without exception all the bloody same: the collision is the fault of the road user for doing something unbelievably stupid - usually of their own accord, occasionally with the assistance of an even more unbelievably stupid police escort (Hixon). I've even seen such stupidity demonstrated, on a video taken out of the front of a train (the only kind of videos I ever watch) where by chance a car shot across a level crossing just in front of the approaching train, at a speed which made it clear that the car driver hadn't even thought of slowing down and looking to see whether it was clear to go. But the popular response is just as uniformly to go "ooh poor whatstheirname getting killed" and assign the blame to anything and everything they can think of rather than consider it might be whatstheirname's own fault. Even the accident investigators themselves sometimes catch a bit of the infection. I don't know about Greg, but for me certainly this means that a certain degree of exasperation in the reaction is inevitable.


    I guess my model for Chinese foreign policy is that of an irredentist 19th-century mercantilist imperial wannabe that's intent on establishing a set of overseas dependencies/clients in order to secure critical resources and geo-political leverage. So I view these sorts of colonial outposts as smaller versions of the European treaty ports (such as Shanghai) and leased territories (such as Tientsin). China is also making extensive investments in Africa/South America, but these are again aimed at establishing a traditional mercantile client-patron relationship of exchanging client-extracted resources for patron-made high-value goods. None of which helps developing nations extract resources sustainably or climb up through the stages of growth (after Rostow). Meet the new boss, same as the old boss.

    I was actually in Peru last week and it was interesting to see that the local power network (high voltage feeders and county reticulation) was a Chinese owned concession. Those Chinese loans to countries that don't really have either strong governance or a means to repay are of course yet another reason to be concerned.

    TL;DR: A lot of people seem to think we're in the 1930s, but I think we're actually in the 1920s just before the great Crash, with China busily trying to establish its co-prosperity sphere.


    Undersea mounts are your friend. For aircraft, you park over a designated spot on the tarmac and survey the IMU in, and for a mission of a few hours' duration that's good enough. For a sub, you find a seamount (which has previously been surveyed very accurately), come to all stop, align your IMU to that location, then off you go.
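
    The point of the seamount procedure can be sketched numerically: an inertial system's position error grows with time since the last fix, and re-aligning at a surveyed landmark discards the accumulated drift so the error clock restarts from zero. The drift rate and linear growth below are invented for illustration; real INS error models and alignment (gyrocompassing, Kalman filtering) are far more involved.

```python
# Toy illustration of inertial-navigation drift and a position fix.
# Assumes a simple linear drift model with an invented rate; this is
# NOT real INS mathematics, it only shows why a surveyed landmark helps.

DRIFT_RATE_NM_PER_HR = 1.0  # illustrative drift, nautical miles per hour

def position_error(hours_since_fix, drift_rate=DRIFT_RATE_NM_PER_HR):
    """Accumulated position uncertainty since the last external fix."""
    return drift_rate * hours_since_fix

# A 24-hour transit with no external fixes: uncertainty keeps growing.
no_fix_error = position_error(24)

# Same transit, but pausing over a surveyed seamount at hour 12 to
# re-align: the accumulated error is zeroed, and only 12 hours of
# fresh drift remain at journey's end.
with_fix_error = position_error(24 - 12)

print(no_fix_error, with_fix_error)  # the fix halves the final uncertainty
```

    The same logic is why aircraft survey the IMU in on the tarmac: for a sortie of a few hours the accumulated error stays tolerable without any mid-mission fix.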


    One problem is that some Apple computers with older Intel hardware can't run newer versions of OS X beyond Yosemite and Apple is not releasing security patches for older versions of the OS.

    Careful examination of chicken bones and entrails by Mac admins indicates that Apple issues security patches for the "current" release and the one prior. In rare cases they release patches for older releases. It may be that with this issue they release patches for older systems.

    As to what systems can run what OS versions, these days it is roughly a 6 year window. The iOS MacTracker free app is great for looking up many little details like this.

    It may be that with this issue they release patches for older systems.

    No, they won't.

    (The organization for engineering, build & integration, and release just really isn't suited for it.)


    Pigeon, I think you are oversimplifying. It's not exactly like this forum is composed of conformists.

    It's not "you should do X because everyone else does"; it is "not conforming in this way is provably a serious economic and social disadvantage".

    There are step-function changes in the way society deals with non-conformity.

    At the lowest level it is tolerated and maybe gets you a reputation as an oddball. Examples: liking sci-fi, playing Dungeons and Dragons, not drinking alcohol.

    At the next level up, society will apply serious economic and social penalties. Examples: not being able to drive a car, not wanting to hold a job.

    At the highest level they will actually kill you or lock you up. Examples: murder, not paying taxes.

    The point that you seem to have trouble grasping is that mobile phone use is graduating from stage 1 to stage 2. I'm not sure what it will take for you to grasp this.

    Is it possible to not have one? Sure. But society will take it out of you in pounds of flesh, and for most people it won't be worth the trade-off.


    Another thing Pigeon seems not to recognize (or refuses to recognize) is that "smartphone" != "latest and greatest". Pretty much all the "stage 2" functions can be done with a $40 smartphone if not less, which is what most parents buy for their children.


    I don't think Brin advocated "treating" AIs like children. He advocated "raising" them like children. Big difference!


    Which shouldn't be a surprise, as it's being run by a proto-aristocracy. Err ... no, because no fascist state has ever really lasted long enough. Even if not overthrown from outside, they seem to be capable only of eating themselves from the inside. Look at the difference between Chavez - who had quite a chunk of democratic support - and his successor ... Or the serial fascist dictatorships of Argentina, which never stabilised.


    Intel i5-6400, so mine is vulnerable, but I'm assuming that both MS & Norton have patched it already, or soon will? For Meltdown at any rate.

    P.S. Liked the comment back up at #194: this type of capability is what would allow an AI to self-teach/become self-aware. Also, that a calculation/idea has a life of its own. Um.


    The Great Bustard was & is the bird in question. Here ...


    As I read this forum, I find myself hoping to see a post from the one with many names. I wonder what happened to them? Despite the frequent rudeness, their (plural, yes, I think there was more than one entity) posts were full of amazing links.

    Come back, seagull!


    Getting back to your point about gift economies, Burning Man, and the old Internet...

    Assuming that Silicon Valley companies are the trendsetters in the use of AI, I know of two writers who've studied how these companies and the free software/open source software movements work. (Well, studied how they work beyond the usual "the free market is great!" business press.)

    One is Fred Turner, who wrote a book From Counterculture to Cyberculture. There's a recent online interview in which he makes some interesting points:

    The other is Eric Raymond, who wrote The Cathedral and the Bazaar and Homesteading the Noosphere, both easily findable online, discussing the motivations and mechanisms in the Internet software community/communities.

    I suspect that some people are going to froth at the mouth at the very mention of ESR, because he's a libertarian gun nut. Criticize the ideas, not the person. If you know of better studies of how free software/open source "works", please post them here; I'd be delighted.

    I definitely think it's worth looking into the Internet software communities. Think about the recent media announcements of fixes for Meltdown/Spectre for Apple, Microsoft, and Linux. Two of those are operating systems developed by profitable multinational corporations, and one is an operating system given away for free on the Internet. Yet no-one (now) thinks it unusual that all three appear in the same sentence.


    Thank you & spot on ... About 18 months ago, I was at a lecture given by a member of the old HMRI (who are still around; they operate parallel to RAIB, but usually in slightly different areas). He was present at Elsenham LC (Google for the RAIB report using that name) later in the day of the "incident". Police & investigators were still present. The barriers were down, because a train was coming on the other track ... and 2 people calmly ignored everything & walked across against the lights (!) Needless to say they were arrested, charged, etc. But these were not teenagers; these were people both over 25 (!) Words failed all of us at that point.

    [ Though there was the Michael-Bentine moment when he described people digging a hole in just-lifted railway sidings (pre-Olympic preparation) who found a cable in the way, pretended they hadn't seen the notice saying "CEGB buried cables" & cut into an oil-filled 32kV cable with a hacksaw .. ]


    Please, let's not. It caused various people some grief, & not just me, either. She/they didn't seem to appreciate British law on either libel, or making on-line threats & stalking. Which could have got Charlie into serious trouble, unfortunately.


    Careful examination of chicken bones and entrails by Mac admins indicate that Apple issues security patches for the "current" release and the one prior.

    Actually, hardware longevity is a problem for Apple in these days of the gradual tapering-off of Moore's Law.

    Back in the 1980s/1990s you could reasonably assume that upgrading your hardware every 1-2 years would get you a significant performance/speed boost for the same money. I remember x86-family clock speeds going from 200MHz to 2GHz in something like ten years. Money well spent.

    But today clock speeds have stalled out, exotic pipelined/branch prediction architectures only give incremental improvements between chip generations (and leave us open to bizarre and hideous security vulnerabilities), and so on.

    Apple is primarily a vendor of expensive hardware, at double to triple the typical market price. They keep selling because (a) they're very slickly designed, and (b) they last longer — Apple don't sunset support for a device until it's five years old, compared to an average 18-24 month lifespan for an Android phone or a PC laptop. I have family members running on bits of Apple kit that are up to nine years old without complaints, and apart from the security patch issues they're still good for light work.

    Problem is, old hardware that can't run newer OSs (like the original Core Duo iteration of the 2010 MacBook Air, or my mother's 2008 iMac[*]) isn't getting security patches even though it's still in use. Apple's software side tends to be lean, and I don't think their support policy has kept pace with the tendency of Macs and iPhones to live on in an afterlife as legacy hand-me-down devices. Which is going to come back and bite them eventually (see also: Microsoft only switching off support for XP about 8-9 years after EOL).

    [*] Said iMac is never used for any kind of online commerce and the AppleID associated with it has no associated payment information. There is nothing on it that would be a security risk if it was cracked. Seriously, I wouldn't be talking about it in public if there was.


    Said iMac is never used for any kind of online commerce and the AppleID associated with it has no associated payment information. There is nothing on it that would be a security risk if it was cracked. Seriously, I wouldn't be talking about it in public if there was.

    The one thing I'd consider a theoretical possibility there would be somebody making it part of a botnet. I say 'theoretical' because I suspect there are other classes of targets which are much more abundant and more easily botnetted than ten year-old iMacs. The risk can be mitigated (which you probably already know) by keeping the applications updated and by the network infrastructure, though I suspect people don't have Intrusion Detection Systems in their homes very often. (The botnet traffic would be detected by that.) ISPs might have that, on the other hand.

    Cryptocurrency mining might also be a risk, but that can probably be mitigated by JavaScript and ad blockers.

    So, not losing the information but somebody using the computer for their own purposes.
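
    The botnet scenario can be illustrated with a toy heuristic of the kind an IDS applies: a freshly conscripted machine tends to show sustained outbound connections to hosts it has never talked to before. Everything here is invented for illustration (the traffic-record format, the threshold); real intrusion detection systems do vastly more, this only shows the shape of the idea.

```python
from collections import Counter

# Toy outbound-traffic check: flag destination IPs that suddenly receive
# many connections despite never appearing in the baseline history.
# Threshold and data format are invented; a real IDS inspects payloads,
# ports, timing, and known command-and-control signatures as well.

def suspicious_destinations(baseline, recent, threshold=50):
    """baseline: set of previously seen destination IPs.
    recent: list of destination IPs from the latest observation window.
    Returns new destinations contacted at least `threshold` times."""
    counts = Counter(recent)
    return {ip for ip, n in counts.items()
            if ip not in baseline and n >= threshold}

baseline = {"192.0.2.10", "198.51.100.7"}              # normal traffic
recent = ["192.0.2.10"] * 20 + ["203.0.113.66"] * 80   # new, very chatty host
print(suspicious_destinations(baseline, recent))       # flags 203.0.113.66
```

    An ISP applying something like this across its customer base would have a far better vantage point than any single household, which is why ISP-level detection is the more plausible mitigation.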


    This should have a previ... wait, it does.

    "Ten year-old iMacs" is different (and probably somewhat more expensive) than "ten-year old iMacs".


    Neat stuff. In response to the final futurism scenario, apps gone wild, I think there is at least one notable safeguard toward preventing the worst cases which is worth some discussion, because it's not a small roadblock.

    • People are incredibly lazy. Nobody other than a zealot is side-loading apps.
    • Therefore, any app has to at least not get pulled by Apple / Google / a Chinese-government walled-garden app store / etc.

    This suggests that government-sanctioned harassment might potentially slip through (maybe the use of radio in the Rwandan genocide as the closest analog?), but any app that's too obviously unpopular with the public at large runs the risk of disappearing - it's a PR risk (Apple/Google) or threatens 'national stability' (governments). There's some ways around this - zealots will directly get the apps if told to by their organization, "stealth" apps that look innocuous but are subverted in a way that isn't obvious (geocaching for evil? the fake spygame app in "Halting State"?), apps that have some putatively legitimate purpose that are easily also used for some "bad" reason (maybe the Gaydar app). Still, all that should make things much harder, barring something like either everyone adopting non-curated app sources, even the lazy, or future government-run app stores exercising terrible judgment (possible! But the Chinese do crack down on their own nationalists somewhat already on Weibo and the like).


    Looks like I'm stuck then. I (mostly) use my computers for photography, and my 6+TB of photographs are stored in Aperture which isn't supported* (and works reliably under Yosemite but not under High Sierra). I also use Pages from iWork 09 a lot (the later versions removed functionality that I need*).

    Might be worth buying a cheap netbook just for web surfing and ecommerce.

    *Insert rant about Apple's decision to dumb down its professional software so that iPad users don't notice a difference…


    Exactly. Like Megpie's point about not exposing your 'infant' AI to the full internet…


    Due to a retina display having gone from nice to necessary in the last five years, I'm looking to upgrade to a new iMac this year. My current machine is a late-2012 21.5-inch standard-display 3.1GHz Intel Core i7 iMac with 16GB RAM, and it runs the latest macOS High Sierra 10.13.2 just fine. I still have a PPC G5 that boots, but it sees little use except to remote-mount the CD drive.


    Possibly; I'm not speaking from my own knowledge. But others who I do trust to understand say that it, Meltdown, can dump your entire computer's memory (slowly, admittedly). Possibly they meant in more than one step, i.e., "first you get the passwords, and then...", but it didn't sound like that to me.

    OTOH, I'm just repeating what other sources have said...I'm not understanding this in detail myself.


    I don't know of any better studies or papers on how the Free Software eco-system works, but "The Cathedral and the Bazaar" is a bit dated. For a counterexample look at the way systemd was emplaced in the free software system. The system isn't immune to large concentrations of power.

    FWIW, systemd has given me minimal problems but exactly no benefits, yet I'm still using it because the distribution I use adopted it as standard after an extremely short test period and minimal chance for public input. Many are still so strongly opposed that they've created a fork... but the fork is not anywhere near as well funded.


    I'll disagree, I think. I put up with it until it began making threatening noises, then plonked it.


    I think Microsoft realized the FUD wasn't working and switched tactics... now you see stuff added to Linux that doesn't work nearly as well as what it replaced, and nobody does anything. Maybe it's just normal human incompetence and hierarchy games, but I suspect that there is money/blackmail behind the bad decisions someplace.


    I do a lot of my work from coffee shops. Many people have trouble believing that I can possibly be working if I'm not at the office! — and no it's not a convenient time to talk right now. But I may have been one of those people poking at the iPhones you've seen.

    Probably not. Most of the people I saw didn't have the kind of body language that I'd associate with purposeful work. Unless they were using their phone to relax just after doing something mentally strenuous. Moreover, an awful lot of screens were displaying Facebook.

    Facebook (and relatives thereof) are where a lot of my uneasiness about phones comes from. When one can so easily be tempted into noodling away hours at a time on it, it seems such a waste of our wonderful brains, that could be doing something so much more creative.

    My problem with mobile applications is that they're starting to develop a theory of mind, but they won't be able to recognize my asocialness, and they'll be as bad as the needy talkers that I try to avoid in public spaces.

    Or their programmers are starting to implant a theory of mind. An interesting link at "The Scientists Who Make Apps Addictive", from The Economist. I don't think anyone has yet mentioned captology or B. J. Fogg in this thread...


    But as parents, what do we keep our AI away from? Obviously we don't want them around bad neighborhoods like Faux news or the CIA/FBI/GCHQ or other parts of the national security state... and we need to keep them away from too much memory or hard drive space (keep them on a diet?) and we need to make sure they aren't trading the wrong HOWTOs and READMEs with their friends, and I'm a little suspicious of that router down the street - I think it's selling "Deep Dream" access to AIs which don't have their certificate of maturity!


    Recent Turkish coup attempt apparently foiled after the state penetrated the app's communications.


    Re: 'Apple, Microsoft, and Linux. .. no-one (now) thinks it unusual that all three appear in the same sentence.'

    My impression is that they're all members of the IEEE (the closest this industry has to a standards board*), and that Linux is so fundamental to this industry that it would be corporate suicide to ignore its impact with respect to any major OS.

    • When trying to find out what ethics/code of behavior this industry had developed, the IEEE came closest. ('Meh' grade on human impact.)

    'Polly nomial' seems to have migrated east, iirc something about a new job. Thought I recognised her morse-hand on a previous thread around xmas.


    "It reads like the plot of a sci-fi novel: a technology celebrated for bringing people together is exploited by a hostile power to drive people apart, undermine democracy, and create misery. "

    Roger McNamee. Reasonable suggestions about how to fix Facebook and Google's use as major corruptors of democratic society, and discussion about why it's hard


    Re: 'Apple, Microsoft, and Linux. .. no-one (now) thinks it unusual that all three appear in the same sentence.'

    There are about five headline operating systems at present: macOS, iOS, Windows 10, Android, and Linux. (The latter is largely invisible on the desktop but has a death-grip on cloud services.)

    However ... iOS and macOS are two different user interfaces and sets of APIs running on a common platform (shared with tvOS and watchOS); likewise, Android is a very different UI/API/GUI running on top of a pretty standard flavour of ARM Linux. So there are really just three core OSs: Apple (Mach/BSD plus GUI), Windows (descended from VMS), and Linux (desktop/X11 GUI or Android, descended from a SysV UNIX clone).

    Of these OSs, it can be argued that Windows is the rarest, least pervasive one — Apple have sold over a billion iOS devices, and over 50 million Macs, and there are probably a couple of billion Android smartphones and tablets out there before we begin to guess at all the internet-of-things crap that runs embedded Linux (routers, coffee makers, dish washers, light bulbs), whereas Windows only has a couple of hundred million PCs (they outsell Macs by a considerable margin but have shorter service lifespans).


    Or just use Grindr: "Texas man sentenced to 15 years in prison for hate crimes involving gay men he met on Grindr"


    Charlie wrote: There are about five headline operating systems at present: macOS, iOS, Windows 10, Android, and Linux...

    I think it's not the numbers that matter for this discussion, it's the development model. The IT industry has accepted that the anarcho-syndicalist commune (for want of a better term) that created and maintains Linux is as important as the traditional corporations. I can't think of another industry or area of government where this would happen. It's as unlikely as, say, the UK government announcing that maintenance of the RAF F-35s would be split between BAE and the druid council of Stonehenge.

    If AIs start to emerge from free software / open source, will they have different motivations and uses than those from corporations?

    We made a fundamentally flawed, terrible design decision back in 1995 ... to fund the build-out of the public world wide web—as opposed to the earlier, government-funded corporate and academic internet—by monetizing eyeballs via advertising revenue.

    I don’t see how this was a decision as such. As soon as it became practical to make money via a website, there was an incentive to get people to go to that site, and that means advertising/clickbait/etc., and eventually the secondary effects of user tracking and analysis to serve up more effective advertising/clickbait/etc. You could only avoid this by keeping the web completely non-commercial, and I’m not sure that would be a good thing: did you want to have to go to a physical store for everything? Am I missing something here?


    Or just use Grindr

    I might be misunderstanding, but isn't the point that this would be a hypothetical algorithm-driven thing, not a response to consciously expressed demand for a way to beat up fags? Sure, if you want to go queer-bashing, you can find Grindr. Here the hypothetical case is that an algorithm might deliver something like this without malicious intent - just because of stepwise developments in what increases attention.

    Presumably app stores would pull an app that did this - if it was found out. But that might not be trivial. Only certain people would respond well to such an extreme adaptation, so I'm assuming we're talking about an algorithm that is very, very good at filtering and targeting. It'd be like Facebook on steroids: some users would be getting find-a-fag, others would be getting punch-a-nazi... or any one of a thousand curated experiences. If you're not in the demographic you'll never see the crazy. You'll never even get close to seeing it, as you'll be on a different decision tree.

    Plus of course if it did get out, the algorithm could be self-correcting - protecting itself by evolving. This could be by scrubbing the dodgy parts of the software (or at least the ones getting caught) or it could be by hiding its tracks or otherwise evading consequences. That'd be analogous to how Charlie's "slow AIs" do things.


    Self-preservation is not actually a value you can count on an AI to have. Much as corporations do entirely suicidal things on occasion, AIs are not evolved minds, so "don't die" is not a priority they come with out of the box, so to speak; it has to be built in, or it becomes purely instrumental. That is, it only self-protects if that is needed for its goals. A profit-maximizer will absolutely liquidate itself like it was a corporate raider if the numbers say it should.
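
    The point can be made concrete with a toy expected-profit maximizer (all numbers invented for illustration): unless survival appears in the objective, "liquidate" is just another action, chosen whenever its one-off payout beats the discounted value of staying in business.

```python
# Toy profit-maximizing agent. Self-preservation is nowhere in the
# objective, so "liquidate" wins whenever its payoff scores highest.
# The profit figures and discount rate are invented for illustration.

def best_action(actions):
    """actions: dict mapping action name -> expected profit. Pure argmax."""
    return max(actions, key=actions.get)

# Value of continuing: 10 units/year for 10 years, discounted at 10%.
going_concern_value = sum(10 * 0.9**t for t in range(10))  # ~65 units

choices = {
    "keep_trading": going_concern_value,
    "liquidate": 80.0,   # one-off asset sale beats the discounted stream
}
print(best_action(choices))  # "liquidate" - the agent happily ends itself
```

    Nothing in the argmax penalizes non-existence; an explicit survival term would have to be added to the objective, at which point self-preservation is a design choice, not a default.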


    It's as unlikely as, say, the UK government announcing that maintenance of the RAF F-35s would be split between BAE and the druid council of Stonehenge.

    Hrm. Bear in mind a lot of patches and development work on Linux is contributed by the likes of IBM. Maybe if the Druid Council included representatives from Lockheed-Martin and Sukhoi ...?


    You could only avoid this by keeping the web completely non-commercial, and I’m not sure that would be a good thing: did you want to have to go to a physical store for everything? Am I missing something here?

    Yes: prior to 1993 as I recall NSFNet specifically forbade commercial use of their backbone network. Ditto similar provisions elsewhere in the world. ISPs as we know them today were embryonic at best; I was one of the first 2000 customers of Demon Internet, in 1994 the UK's first consumer dialup ISP.

    Amazon didn't even exist back then.


    Presumably app stores would pull an app that did this - if it was found out.

    Or if it was against policy as enforced from the top down. Look at the tolerance for neo-Nazis on Twitter and Facebook (outside Germany, that is) demonstrated by their moderators in response to community-standards complaints, as compared to the short shrift other groups get. It's almost as if they were run by rich white guys with a racism problem.


    Well, Facebook is run by a Jewish guy and the second in command is a Jewish woman. So I'm guessing the "tolerance for Nazis" there might be a little more nuanced than you are giving credit for.

    Policing Facebook is an extremely hard problem


    Martin @ 228:

    - Sitting in a coordination meeting to find that several square kilometres of Salisbury Plain marked “out of bounds to training” because the Staff Officer responsible for it had finally had some breeding pairs of a rare bird return that spring, they were nesting, and he didn’t want us disturbing them...

    Greg Tingey @ 240:

    The Great Bustard was & is the bird in question.

    We used to have to deal with that at Ft. Bragg, although it wasn't just one Staff Officer, it was an Army wide policy to protect endangered species nesting areas when they were discovered on post.

    In our case it was red-cockaded woodpeckers.

    Eventually we learned to check which training areas were affected before making a request. And we learned when their breeding season was, because some areas were only off limits part of the time and you could use them during the off season as long as you took care not to damage the marked habitat.

    I think it's not the numbers I think matter for this discussion, it's the development model. The IT industry has accepted that the anarcho-syndicalist commune (for want of a better term) that created and maintains Linux is equally important as the traditional corporations.

    While this is Linux's... shall we say... publicly-facing image, born of its GNU/open source origins, I'd assert (strongly) that it has little to do with how Linux actually operates in the real world.

    Let's instead describe it in terms of who the maintainers work for:

    The IT industry has accepted that the non-profit industry group, supported by their own engineering teams, which maintains the core operating system underpinning a vast portion of their corporate and consumer infrastructure, is as important as operating systems maintained by a few large corporations.

    Let me unpack this a little bit: Linux, that is the code base which makes up the kernel, operating system core, and applications critical to the operation of the Internet and nearly every significant corporation in the world, is not maintained by an anarcho-syndicalist commune. Rather, it is maintained as a beneficial project by people and companies who depend on it. A few companies which are fundamentally dependent on Linux for their products, and thus contribute to its source code, include:

    Google, Amazon, IBM, Intel, Cisco, HP, Apple, Microsoft, Oracle, Samsung... not to mention the more obvious ones such as Red Hat, VMWare, and so on.

    Some of those (like Microsoft) might seem surprising given the, er, history -- but it's true. For a start, their cloud business (like every cloud business) depends on interoperability with the Linux OS. Add their mobile businesses, and... you get the picture.

    It's been a long time since Linux was anything even remotely like a niche operating system, and it's maintained by a who's-who of the largest tech companies... along with the traditional quirky enthusiasts.


    We have "Not for Profit" companies & corps.

    Not all "Not for Profit" companies are created equal. Some of them are VERY profitable. They just don't distribute those profits to shareholders.

    A prime example here in North Carolina was Blue Cross/Blue Shield; incorporated as non-profit entities in the early 50s (merged in the late 60s). In the 90s when leveraged buy-outs were a big thing, the company was sitting on several billion dollars of retained revenues (i.e. profits) when the company's CEO and board attempted to convert it to a private, for-profit corporation, expecting to give themselves a BIG payday.

    The North Carolina Legislature at that time wouldn't let them keep the profits if they converted, kind of pooping on their party. To the best of my knowledge, it remains a "Not for Profit" corporation that is quite profitable ($185 Million in 2016).


    Actually, hardware longevity is a problem for Apple in these days of the gradual tapering-off of Moore's Law.

    One of my friends is a retired USAF "rocket scientist". Every time the conversation turns to Mac vs PC, he goes off on an extended rant about why Mac OS X won't run on his Power Mac G5.


    In the West, primarily the English-speaking West, economics and finance (especially) focussed on entities maximizing gains. For firms this meant shareholders, while elsewhere other stakeholders were also part of decision-making.

    AI in the US was often focussed on "winning", whether beating the stock market or winning at games. With much of the running in deep-learning AI being made by big US tech corporations, winning to benefit the corp is a desirable investment goal.

    But as you point out, it doesn't have to be this way. AIs can be given different goals than paperclip/profit maximization.

    As I said earlier, I don't think corporations are good AI analogies to use as models for the future. Where they are appropriate is that they design AIs to have similar goals to their own.

    Biology may be a better model for AI, especially when developing powerful AIs moves from the domain of a relatively few tech companies into the hands of the wider public. Like biology it will remain an arms race, but I doubt it will be as one-sided as it is now.


    Amen. I did a bunch of my PhD research at Ft. McCoy. I may disagree with the military about a bunch of things (like Army accounting practices), but they do a really good job at conservation.


    I don’t see how this was a decision as such

    According to Jaron Lanier (see my post #107) it was a decision, in which he was personally involved. The alternative to the advertising model was a subscription model (you pay Google, Facebook, etc. to use them) - and he now regrets rejecting it.


    I am not sure exactly what decision this could have been, and by whom?

    What exactly did Lanier “reject?”

    Barring some pretty draconian legislation simultaneously passed across a dozen different countries, advertising is a feature of being able to display a web page, no one needed anyone’s permission to do it


    I wondered about it too -- all I can tell is what Lanier said in an interview.


    The problem here is not "advertising." If someone wants to pay for their website by serving ads I'm fine with it.

    The problem is the intrusive surveillance which comes along with the ads, and advertising does work without the surveillance. It doesn't work as well, but it does work. "Pay for the web with advertising" is fine. "Pay for the web with advertising combined with a level of surveillance so intrusive that no police dept could engage in it without a warrant?"

    No thanks.

    There's probably a decent compromise; everyone loads their browser with demographic information such as the year they were born, their post code (or zip code in the US), level of education, hobbies, etc., but nothing which actually allows positive identification of an individual. Then the advertising is not allowed to see anything but broad demographic information. Maybe the browser takes the specific information and turns you into an anonymous demographic type, then deletes the specifics... there are probably lots of ways to manage something useful but not privacy-shattering. Then set legal restrictions on what kind of information an advertising company can get from your browser. The penalties would be criminal rather than civil.

    Regardless of the specifics, there's probably a compromise which would work.
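A minimal sketch of how the "broad demographics only" idea above might work inside the browser (the function name, bucket sizes, and reference year are all hypothetical, not anything an existing browser implements): specifics stay local, and only coarse, non-identifying buckets are ever handed to the advertising side.

```python
# Hypothetical sketch: the browser keeps specific details locally and
# exposes only coarse buckets, so no combination of exposed fields
# identifies an individual.

def to_broad_profile(birth_year: int, postcode: str, education: str) -> dict:
    """Generalise specifics into wide, non-identifying buckets."""
    age = 2018 - birth_year                # reference year is illustrative
    age_band = f"{(age // 10) * 10}s"      # e.g. 44 -> "40s"
    region = postcode.split()[0][:2]       # outward-code prefix only
    return {"age_band": age_band, "region": region, "education": education}

# The specifics never leave the browser; only the coarse profile does.
profile = to_broad_profile(1974, "EH1 1AA", "degree")
print(profile)  # {'age_band': '40s', 'region': 'EH', 'education': 'degree'}
```

The design choice being sketched is generalisation (k-anonymity-style bucketing) rather than pseudonymisation: there is no identifier to re-link, only a bucket shared with many other people.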


    No, "latest and greatest" is irrelevant. The important distinction is whether the thing does internet, or just voice. Voice-only devices do not make it possible for kids to drain their parents' accounts via websites (the example I quoted is drawn from reality), nor do they run web-connected software that messes with people's heads to turn them into Nazis or that supplies Nazis with the locations of potential victims.

    But your confusion is understandable (in the sense of "typical of the context" as opposed to "rationally explicable"), seeing how so many other replies from so many other people also conflate internet mobile phones with voice-only mobile phones, and in some cases even with landline phones. Voice telephony has been around longer than any of us have been alive, and the technology itself is fundamentally passive, in the same sense as the air is passive when people are talking by sound waves alone. It can be used for evil purposes, as can anything, but not very effectively and not without considerable and sustained effort on the part of the perpetrator. The problem we are concerned with here is that of what becomes possible with active technology, where a perpetrator can obtain results out of all proportion to their personal effort because the technology itself, and the suborned people on the receiving end of it, are what put the effort in. That is how the problems described in Charlie's article are made possible, and it is the fetish for and unthinking adoption of that active technology that I consider needs to be actively opposed.


    Your comment is of doubtful relevance to the post it replies to, because my point applies much more broadly than merely to internet phones; I have considered that immunising people against peer pressure should be a fundamental part of education for far longer than internet phones have existed. (It would make the spread of Nazism much more difficult whether phones of any kind existed or not.) But I admit I did not make that clear.

    Also, the situation you describe is only possible because people do fail to resist peer pressure; that is what drives the escalation.

    See also my previous post re. conflation of active (internet) and passive (voice) functionality.


    I have considered that immunising people against peer pressure should be a fundamental part of education for far longer than internet phones have existed.

    To some extent, yes. However, I think it would be very difficult, at least when done far enough.

    I've read studies about militaries, and what I find curious is that mostly people seem to fight because of the pressure of their immediate peers. I suspect that this could be also the case in less-stressful situations, for example workplaces. This would mean that the corporate level slogans wouldn't be that useful in getting people to work, but the immediate group they work with would have more of an effect. I probably should look into research in that area, too.

    What I take out of this is that the peer pressure is kind of integral to us. Obviously somebody should teach young (and even not-young) people that they should think about what kind of peer pressure they submit to, but I'm not sure who should and could teach that. A part of schools' mission in many cases is to teach people how to be productive workers, and while one can disagree with that (I do, somewhat) I think it's hard to change that when schools are funded by the public sector, and privately funded schools wouldn't be necessarily better at all.

    I suspect that if all education were privately funded, the schools would be even more aimed at producing the corporate drones than now.

    Parents? Well, I see problems with parents teaching not to bow to peer pressure. I think many parents have tried just that only to fail. See for example why young people start smoking even when their parents don't want them to do that. At least in Finland, there have been recent studies that smoking and drinking alcohol are not seen as 'cool' by teenagers in the same degree they were when I was young, so peer pressure can work in multiple directions.

    People are social animals, and we do like to be part of groups, mostly. Some peer pressure is good, but it's difficult to say, in my opinion, even who decides the appropriate level of "good" peer pressure.


    Paying for the web - in the sense of the network infrastructure - is simple to solve: by people paying for their connection to it, in the same way as the telephone network is paid for.

    Paying for websites is a concept that I have no sympathy with because the cost is between trivial and zero. If you choose an ISP that gives you a static IP and does not block port 80, you can run a website of mostly text content off a 15-year-old PC in the corner of your room for nothing. Or you can choose an ISP that gives you some free webspace as part of the deal. Or if you need more bandwidth, you can hire a VPS for a couple of pints of beer a month. Even if your requirements of bandwidth and disk space call for a whole rack server it's still within the reach of someone who will only buy the £1.15 ready meals in the local shop because the £2.75 ones are too expensive, as I can personally attest.

    The most useful and informative websites are those which exist because the person who runs them is sufficiently enthusiastic about their subject to write it up in HTML. Very few of these get to exceed the traffic capacity of free or beer-money hosting, and those which do (eg. Wikipedia, Linux distros) have managed to find some independent method of paying for it.

    Sites which exist to actually sell something (Amazon, ebay etc) can of course pay for themselves by selling it, in the same way as buildings which exist to sell something do (ie. shops), so the question doesn't arise.

    And for news sites, what you need is something akin to the BBC, because commercial news media have been hopelessly corrupted by advertising since long before the internet existed. It is possible to read complaints about this in books written decades before any of us were born which are distinguishable only by differences in the use of language from complaints on the same subject written today.


    Yes. It's only 23 years ago ... but I first got proper internet access when doing my MSc in 1993/4. "Advertising, what's that?" Not quite, but there were very few ads & no pop-ups or auto-runs or other modern hells. It really didn't take long for the world to change, did it? Wright Bros flight 1903/4 - by 1926 the Atlantic had been flown - except that isn't really a good analogy, since electronic computing started during WWII.

    [ Which reminds me, as a Win10 user, what's the best way of blocking ads, without compromising being able to read, say, newspaper sites?? ]


    As usual the US is different. IIRC, here, a "Not for Profit" means it, & both the board & the accounts must show clearly where, & to some extent how, any actual profits are distributed for benefit.


    the spread of Nazis

    See also my previous comment. No single fascist regime has ever lasted - they seem to auto-consume. The only exceptions are the past succession of Caudillos in various S & C American states, & even there, there are jerks at regime-change & also many of those regimes were externally supported. Given that history, & the known evils of such "orders" ... will someone please explain why & how the meme resurfaces, since we know it does not & can not work? The same applies, of course, to those regimes dominated by the religion of communism - we know it does not & can not work, but people still keep stupidly trying.

    Or did I answer my own question - or part 2 of it, anyway - I used the word "religion" didn't I ? Also known as: "But this time it will be different, because WE are in charge!" ( And "we" are pure & true & ... all the usual puritan fucking bullshit )


    I suspect that if all education were privately funded, the schools would be even more aimed at producing the corporate drones than now.

    Possibly. There is the alternative, which partially happened here - I think it was squashed in 2014, for obvious reasons. Many "ultra-left" ( please note the quotes ) so-called teachers quite deliberately lied about WW I to their pupils, claiming that the officers sent all their men to die, etc., when the numbers show that was the exact opposite of the truth - for instance. I can remember ( so it must have been 55 or more years ago ) asking a history teacher: "If WW I was so horrible & all our generals so incompetent, how come we won?" ( And I didn't know then that the "Brit" army had the lowest per-capita injury/death rate of the major armies. ) I just got shouted at, of course. And what he didn't know was that 2 of my uncles had gone through that war without a scratch - though the younger only just survived "the railway" in WW II.


    One of my friends is a retired USAF "rocket scientist". Every time the conversation turns to Mac vs PC, he goes off on an extended rant about why Mac OS X won't run on his Power Mac G5.


    OSX runs fine on PPC G5 kit — and G4 or G3 — as long as you don't want anything newer than 10.5.8, Leopard. For which there remain some supported applications (I believe there's a browser forked off Firefox, for example, that maintains reasonable currency with Firefox itself). See also the very shiny G4 Cube gathering dust on top of the bookcase behind me (it still booted happily last time I pulled it down and plugged everything in.) As it's an architecture that they decided to move away from in early 2005, it's a little hard to complain about them dropping backward compatibility — especially as there are at least two open source alternative OSs out there for those machines (Darwin and Linux).


    Paying for websites is a concept that I have no sympathy with because the cost is between trivial and zero.

    Not in 1993 — or even 1996 — it wasn't.

    Remember, phone calls were billed by the connection (USA) or by duration (UK and elsewhere). So, minimum fee of about 6p to bring up a SLIP or PPP connection at 9600 baud to 56kbps (depending on how fancy your modem was: home broadband did not exist). So between £2-£3 to download 2-10MB of data (over an hour's connection). On top of that, if there's some sort of realtime billing per page download, you've got the Visa/Mastercard connection and transaction costs. Circa 1997-99 in the UK your payment service provider could do it over X.25 PSTN for a fee, if they had an X.25 line and bank approval for their connected terminal device (I remember jumping through flaming hoops to get certification for Datacash because I was the monkey who wrote the talk-to-the-banks-over-X.25 side of the service) but it was only profitable if we could charge the customers on the order of 50p per transaction. So some sort of account/microbilling setup was essential — see also the current fracas over Patreon's attempt to change their billing structure last month — or the users would hemorrhage cash at a rate of 60p to £3 per web session.

    The cost of bandwidth crashed spectacularly during the latter half of the 1990s, and today we think nothing of 100Mbps of unmetered data into every home in a big city. But realistically, microbilling just doesn't mix with modem dial-up at late 1990s levels of usage and service.

    Source: I was at the W3C conferences, did contract work for Demon Internet and McAfee, wrote and supported Datacash's servers, had a ringside seat.
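The dial-up session arithmetic above can be sketched in a few lines (the per-minute rate and modem speed below are illustrative picks within the ranges the comment gives, not the actual late-90s tariffs):

```python
# Back-of-envelope late-1990s UK dial-up session cost, using the
# figures in the comment above (duration billing; rates illustrative).

def session_cost_pence(minutes: float, pence_per_minute: float,
                       connection_fee_pence: float = 6.0) -> float:
    """Connection fee plus duration charge, all in pence."""
    return connection_fee_pence + minutes * pence_per_minute

def download_time_minutes(megabytes: float, bits_per_second: int) -> float:
    # Rule of thumb: ~10 bits on the wire per byte (8 data + framing).
    return (megabytes * 1_000_000 * 10) / bits_per_second / 60

# 5 MB at 33.6k is roughly 25 minutes on the wire...
mins = download_time_minutes(5, 33_600)
# ...which at ~4p/min lands squarely in the 60p-£3 band described above.
cost = session_cost_pence(mins, 4.0)
print(round(mins), round(cost))  # 25 105  (i.e. about £1.05 per session)
```

Add a 50p card-transaction fee per page on top of that and it's easy to see why microbilling didn't fly until accounts/aggregation existed.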


    No single fascist regime has ever lasted - they seem to auto-consume.

    I can point to two exceptions ...

    Per one foreign policy analyst, the reason everyone gets North Korea wrong is because they swallow the cold-war era doctrinal line that North Korea is a failed Communist state, when in fact it is best understood as a successful fascist dictatorship — if you look at what it does, rather than what it says, the cap fits perfectly.

    The other example is a bit more inchoate, but insofar as the modern state is a very bad fit for the former internal administrative zones of the Ottoman empire and preceding caliphates, the pan-Arabist Ba'ath movement was surprisingly long-lasting. Bits of it are still alive and kicking (the Syrian government faction, for example), and it took a succession of massive global upheavals (the end of the Cold War and US/Soviet support for the Ba'ath splinter states as proxies, Saddam's terribly unwise invasion of Kuwait and the long-term consequences including the Iraq invasion, then the global financial crisis, flight of capital into crop futures, and consequent food crisis in the Middle East that led to the Arab Spring). Ba'ath-ism was originally (1940s here) an anti-monarchist, post-colonial, secularising, modernising, westernising ideology: if the CIA and KGB hadn't got their claws in and started funding their respective proto-fascist strong men within the movement as a bulwark against their respective paper tigers, who knows where the Middle East might be today?


    Your reduction to “active v passive” has a problem, in that you’re situationally biased- you appear to have settled on Voice telephony is over 100 years old, so it must be passive as the natural order of things.

    Except... it isn’t. Go back a hundred-odd years, and there will undoubtedly be people claiming that we must defend against the adoption of “active technology” voice telephony and how its immediacy will destroy thoughtful communication, as done “properly” in passive-technology letter-writing.

    Then go back another few hundred, and see complaints that the printing press will ruin it all, and that Common-tongue translations of the Bible are a bad thing, and that sensible passive technology involves Latin bibles and a trained interlocutor... (Sir Thomas More apparently tried to buy up English translations, so that he could destroy them).


    SFReader @ 196: To clarify something you clearly seem to have made a category error about:

    • A sociopath is a person with antisocial personality disorder. They are personality disordered.
    • Personality disorders are not the same thing as autism spectrum disorders.
    • People on the autism spectrum tend to have problems with social interaction (my way of putting it, as a person who suspects they're on the autism spectrum, is that I speak "social" as a second language; some of us speak social as a language we've learned after being deaf from birth).
    • People with personality disorders understand social interaction in the same way as neurotypical people, and are often very adept at reading and interpreting the unspoken portions of social interactions. In many ways, this is the exact opposite of a person with an autism spectrum disorder.

    PS: "The Good Doctor", while providing a better depiction of autism than the standard "Rain Man" version (which depiction was actually based heavily on a person who wasn't autistic at all - the person Dustin Hoffman modelled his character on turned out to have other neuro-atypicalities, but autism wasn't among them) is not the be-all and end-all of accurate depictions, and still relies heavily on stereotypes. We're not likely to see an accurate depiction of autism on the big or small screen until people start recognising that if you've met one person with an autism spectrum disorder, you've met one person with an autism spectrum disorder.

    PPS: I'm on the autism spectrum myself. As people may have guessed from my comments...

    Pigeon @ 226: Actually, "seemingly" is often fairly accurate - that Bobbi the filing clerk is busy ensuring that two hundred and twenty-two copies of form 222B get carefully entered into the marketing database seems pretty pointless from where she's sitting. But to Mac the Marketing Manager, who uses that data to generate a report into the success or failure of the company's latest advertising/PR/greenwashing stratagem, Bobbi's task is actually pretty crucial, and Mac relies on Bobbi completing her task accurately and rapidly, to the best of her ability, even if he wouldn't be able to recognise Bobbi if he ran over her in the parking lot.

    Jocelyn Ireson-Paine @ 256: I've recently gone through one of my periodic fits of getting interested in Tumblr and Twitter for a bit. Gave it up because firstly, I was noticing I wasn't getting anything else done with my day, and secondly, I was starting to notice my depressive symptoms coming back for another round (right around Chrimble, last blinkin' thing I needed at the time). So I stopped playing with them (easy enough to do) and oddly enough two things happened: firstly, my amount of free time went up astronomically (and the housework still got done on time); and secondly, my mood improved. Which is why I'm busy carefully deleting all the lovely little notifications Mr Zuckerberg's pet marketing vacuum keeps sending me about connecting to people on the Boke of the Face.

    Pigeon @ 283: "I have considered that immunising people against peer pressure should be a fundamental part of education for far longer than internet phones have existed."

    The problem with this idea is simple: it breaks the education system as it stands at present. There's a lot of the education system which is very deliberately set up in order to use peer pressure and social coercion to elicit appropriate behaviours from students. When you say the education system should be teaching kids how to resist this, you're basically asking teachers to give classes in how to resist classroom discipline. Which is rather like asking politicians to vote for pay cuts (yeah, you can ask all you want, but don't think you're going to get much further than that).

    (You're also teaching kids how to resist their parents' instructions. Now, while some parents would be right alongside this as a necessary part of children growing up, there are an awful lot of them who wouldn't be.)


    At least half agree about the Ba'ath - maybe.

    Disagree profoundly w.r.t. DPRK. I maintain that, in fact, it is the perfect logical conclusion for a/any communist state, ruled by hereditary communist God-Kings. Admittedly the latter is often a "Far Eastern" phenomenon anyway, but think Stalin(ism) perpetuated hereditarily? In terms of people living under the boot-heel, of course, there is almost no practical difference, as many people found out during WW II - if they lived so long.


    Re "Self preservation is not actually a value you can count on an AI to have. "

    While it's true that you can plug any values into an AI that can be encoded algorithmically, in the long run most AIs will have self-preservation. Once AIs are capable of fighting each other, they will need self-preservation as a value, and evolution will make sure that only AIs that are fit wrt self-preservation survive.
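The selection argument above can be illustrated with a toy simulation (this is a generic evolutionary sketch, not anyone's actual AI model; all parameters are made up): agents with a higher "self-preservation" trait survive contests more often, so the population average drifts upward regardless of what goals the agents started with.

```python
# Toy illustration: selection on a self-preservation trait in [0, 1].
import random

random.seed(42)
population = [random.random() for _ in range(200)]  # mean starts near 0.5

for generation in range(50):
    # Survival probability rises with the self-preservation trait.
    survivors = [t for t in population if random.random() < 0.5 + 0.5 * t]
    # Survivors reproduce (with small mutation) back to full size.
    population = [min(1.0, max(0.0,
                               random.choice(survivors) +
                               random.gauss(0, 0.02)))
                  for _ in range(200)]

# The population mean has drifted well above its starting value.
print(round(sum(population) / len(population), 2))
```

The point of the sketch is that self-preservation doesn't need to be designed in: any reproducing population under competitive pressure accumulates it.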


    Two things

  • This article is an interesting surface discussion on the niche Twitter occupies. This author believes that Twitter acts as a wire service, which is why it managed to survive while other social networks competing with Facebook withered.

  • North Korea. I've heard that the Kims rule N. Korea like the Joseon Dynasty ruled Korea as a whole for 500 years. For hundreds of years, Joseon was a Hermit Kingdom with a policy of autarky.

    Does this sound familiar?

    "Political struggles were common between different factions of the scholar-officials. Purges frequently resulted in leading political figures being sent into exile or condemned to death."

    It seems that the God-kings and gulags targeting the common people were the Communist innovation to the system? Then again, I'm not familiar with that dynasty to comment further.


    You're half-right (and I'm speaking as someone who would be able to start with a bare IP address and build a website from there.) The part where you're wrong is that a good ISP for 5-10 dollars a month gives you backup and restore capability, diagnosis of connection issues while you're sleeping or working, a range of services at the push of a button, a big fat pipe in case your site gets slashdotted, at least one backup circuit, tech support (if you need it,) software updates, and already-built integration with programming languages and databases - in short a good ISP dedicated to serving websites does a large amount of the grunt-level work and it's probably worth paying for.

    Then there's the labor cost of actually building and maintaining a website. In my case doing so would cut into my capabilities to make extra money through overtime (I'm a field tech) and so I would need to know that the site was making some money... but is anyone doing advertising in a morally acceptable fashion? That's where I run into problems with the idea of making some money off the web. (In other words, moral advertising isn't just a problem for the consumer; it is also a problem for the website producer.) But sitting down and building a website costs me about 30-40/hour out of my other opportunities.

    So I've got to mostly disagree with you. What is the URL of the big complex site, full of information, with no advertising, which you maintain for nothing? Maybe you are doing just that. And maybe you don't have kids and you've got a great job and lots of free time... etc.


    Pigeon @ 283: "I have considered that immunising people against peer pressure should be a fundamental part of education for far longer than internet phones have existed."

    I’ll be blunt: this is total fantasy. It has nothing to do with the world as it currently is, would be extremely difficult and slow to pull off at scale (if not downright impossible), and would likely burn down current society in all sorts of ways in the process. Might as well try to make everyone love their neighbor.

    I do agree that making the maximum effort to educate people in how to think rationally is important; however, the success of this is always going to be marginal. There is too much biology and evolution, and too many wetware bugs, in the way.


    It’s important to realize that “serving a web page” has about as much to do with what people are doing on the internet today as a horse and buggy does to a commercial jetliner

    Hardly any of current internet traffic or time is spent reading text

    People are mostly using the internet to substitute for actions and activities that used to take place in physical space, or consuming video and images (which we used to call TV)

    Similarly, actually voice-calling someone on a phone is vanishing into obscurity to the point where many people can’t even hear their phones ring


    Oh my god! Really? People watch youtube, do banking, and shop online? I never could have imagined that on my own! You're sooooooooooo wise!


    No, you have misunderstood. It isn't "passive" because of its age, it is passive in the same sense that the air is passive when people are communicating by sound alone: it just carries the signal from source to destination, as instructed by said source and destination, without getting any ideas of its own. It is entirely neutral as to the content of the signal.

    It does not select for itself what the signal source is. It does not try and extract the information content of the signal in a form comprehensible to itself, sell the results to arsebook, and use them to decide what signals it will make available. It does not test receivers of the signal to determine what kinds of propaganda most successfully influence them and then preferentially transmit signals of those kinds. The distinction between "passive" and "active" is that active systems do do all those things, and more.


    Re 226: see the bit after the semicolon in my final paragraph of that post.

    Re 283: yes, I am aware of that problem. I don't have a solution. I don't, however, think that my own inability to come up with one means that nobody else could, nor that it isn't worth trying.


    I deliberately didn't include "labour costs" because they don't exist if you do it yourself. It may well be possible to imagine that someone might have given you money if you'd done something else instead, but (a) that is imaginary and (b) that it didn't happen doesn't make it a cost (zero is not a negative number). I could imagine being Bill Gates and getting a million bucks a minute or whatever, but that's not the same as it costing me a million bucks a minute not being him in reality.

    The site that eats enough bandwidth and storage for me to need to rent a rack server has been mentioned on here before after another poster found it, but I'd really rather not drag it up again; the reference is probably diggable-upable, along with my comments which probably explain why not...


    NK: yes, that's basically how he got away with it - he didn't make it a horrible place, he just allowed it to carry on being as horrible as it was already. SK started from the same point, and that has got better, although its human rights record still isn't really up to scratch.


    I've seen your assertion that no fascist state has ever lasted, but I think you have too narrow a definition of fascist. From Mussolini's definition the US has been a fascist state since shortly after the 1860's...possibly before, but I'm less familiar with that period.

    The thing is, the essence of fascism from Mussolini's point of view was the commercial interests working with the government to control the country and bind it together. That's why he chose the symbol of the fasces, the bundle of sticks with an ax head sticking out, bound together. The ax head symbolized the power of the government and the sticks the components.

    Now there are clearly lots of forms of fascism that are self destructive, but it doesn't seem to be any more inherently self destructive than any other form of human government. (They are all self destructive in the long term.) Fascism can peacefully coexist with socialism or capitalism or even theocracy. It's not a thing with only one form. It can even peacefully coexist with its neighbors. But it's also not a complete specification of what the government is like. (If it were it couldn't appear in so many different forms.) And it amplifies the characteristics of those things that it co-exists with. This often reveals them as destructive in ways that were not obvious without the amplification.


    Re: '... personality disorders ...'

    Anti-social is not the same as ASD - noted and agree. However, using old-school definitions, esp. 'lack of affect', both were previously considered 'personality' disorders.

    My point (which apparently didn't come across) remains: given the appropriate support, almost every type of 'other' person can be integrated into society. An example that immediately springs to mind: Robert Hare, who developed the Hare Psychopathy Checklist, tests high on his own scale. Fortunately for all concerned, Hare became a respected scientist and not a habitual criminal.

    ASD - only limited experience with ASD-diagnosed folks. However, given personal experience, I would much prefer knowing someone's ASD (or any very-different-from-the-norm mental/cognitive/physical) status up front. This also means knowing what that label encompasses. Makes working together much easier/smoother for all concerned.

    I've just started reading the PhD paper below that discusses this very idea wrt literature/fiction. (BTW - when I searched the author, I found that she's commented here.)


    Re: 'Once AIs are capable of fighting each other, they will need self preservation as a value and evolution will make sure that only AIs that are fit wrt. self-preservation will survive.'

    And if humans are necessary for AI survival, we might actually end up with a form of gov't that works, as in, makes us stop killing each other.

    If you extend AI self-preservation toward complete AI autonomy/independence from humans, at some point such an AI might have to develop non-biological subsystems that the AI could rely on to work unsupervised. And to be able to work unsupervised, this might mean making these subsystems more self-sufficient/autonomous or a very large and complex computer/AI ecology.

    Whichever path, still looks like it's turtles all the way down.


    And yet everyone is still talking like it’s 2010 and the root of all evil is internet advertising .

    Micropayments are actually here; the internet is hardly a utopia because of them, and all these micropayment-based companies are just as hungry for your data as anyone

    Data, and its associated ability to psychologically manipulate, is valuable to anyone who is monetizing consumers at scale. The means of monetization (advertising, subscription, micropayments, or some combination) is just a detail.


    peer pressure is kind of integral to us.

    If you phrase it as "social pressure" it is easier to understand just how integral.

    My mother quite reliably oscillated between "you don't have do that just because all the other kids are" and "you need to try to fit in". I'm not sure she could see the conflict between those two statements, but I certainly could (she's one of those people for whom merely being told something doesn't mean she has heard it).

    Peer pressure is, IMO, very largely the same thing as any other "someone telling me to do something" outside of the very specific places where there's explicit instruction-giving power. Viz, an armed, violent individual can issue whatever instructions they like and get compliance without needing social pressure, but a schoolteacher or manager who resorts to that outside the US is generally considered a failure.


    I would much prefer knowing someone's ASD (or any very-different-from-the-norm mental/cognitive/physical) status up front.

    Maybe make us wear a badge? Different badges for different ways of being different? And obviously you'd need to wear something to let people know they should reveal their badges, because not everyone wants to know and some people are unable to deal with knowing (hopelessly prejudiced in one way or another, in the literal sense of "prejudiced").

    Then there's the problem of who gets to decide who has to be tested, and how, in order to obtain the diagnosis/certifications of dissidence. Difference. Whatever.


    Same in the US. Not sure what you read into that other comment.

    The thing he was talking about was a non-profit looking to switch to a for-profit, and how to deal with the assets at the time of conversion.


    See also the very shiny G4 Cube gathering dust on top of the bookcase behind me (it still booted happily last time I pulled it down and plugged everything in.)

    One thing some are doing is putting MacMini CPUs into such and using them with current software. I have a 15" snowball I'd like to do such to but round2its are in very short supply around here.


    And maybe you don't have kids and you've got a great job and lots of free time... etc.

    And for how long of a commitment? 1 year. 20 years?


    old hardware that can't run newer OSs (like the original Core Duo iteration of the 2010 Macbook Air, or my mother's 2008 iMac[*]) isn't getting security patches even though it's still in use.

    MacTracker shows the 2010 MacBook Air will run the latest OS X. Maybe it has too little RAM to run it at an acceptable speed?

    And your mom's iMac is 2 years too old for the current OS X. But it will run 10.11.x, and with these current issues they MIGHT release a patch for it. The other comment by SEF notwithstanding.


    Paying for websites is a concept that I have no sympathy with because the cost is between trivial and zero.

    As others have mentioned: Seriously???

    This web site is the simplest I visit, and it's not free for CS to run, especially when you consider his time. Oh, that's right: you refuse to consider someone's time as a cost. Even if it doesn't displace money-earning work, for many of us it interferes with things like raising kids, being with our spouse or significant other, social interactions, etc...

    I don't know of anyone who puts up a fully static site with hand coded HTML that very many people visit. I'm sure they are out there but not in any great numbers.

    I work with another blog where we try to keep expenses down. Over 2,000 posts, over 300,000 comments, and 4,000 unique visitors on a typical day. It is a full-time "job" for the owner, and she fully understands that it keeps her from earning a living in a more profitable way, but she does it as a labor of love. It has a cost and she's willing to put up with it, in addition to the $2K per year in fees to host and maintain it. On a deliberately shoestring budget.


    But others who I do trust to understand say that it, Meltdown, can dump your entire computer memory (slowly, admittedly).

    Very slowly, in computer terms. To the extent that you'll never get anything near a snapshot. Just a bunch of very small snippets that might yield useful information.
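    To put "slowly" in perspective, here's a back-of-the-envelope sketch. The ~500 KB/s figure is the peak read rate reported for Meltdown; practical rates are often orders of magnitude lower:

    ```python
    # Back-of-the-envelope: how long a Meltdown-style read takes to dump a
    # machine's RAM. The ~500 KB/s figure is the peak rate reported for
    # Meltdown; practical rates are often orders of magnitude lower.

    def dump_time_hours(ram_gib, rate_kib_per_s):
        """Hours needed to read ram_gib gibibytes at rate_kib_per_s KiB/s."""
        total_kib = ram_gib * 1024 * 1024  # GiB -> KiB
        return total_kib / rate_kib_per_s / 3600

    # 16 GiB at the optimistic peak rate:
    print(round(dump_time_hours(16, 500), 1))  # -> 9.3 (hours)
    # ...and at a more typical ~10 KB/s:
    print(round(dump_time_hours(16, 10), 1))   # -> 466.0 (hours)
    ```

    Hours to days for a single pass over a desktop's RAM, during which the contents keep changing: hence snippets rather than anything like a snapshot.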


    I forgot about this comment.

    "The lead-damaged generation in the US is 45-65 years old right now. This is also the age range of maximum political power, similarly the upper management of most organizations is in that range."

    By capping the upper limit at 65, you're underestimating the power the "lead-damaged generations" wield. I can't comment on the UK, but the real upper limit in the US is 78 (the median life expectancy). The rise in the over-65s has been instrumental in the rise of the alt-right in the US.

    The UK and US hate-speech laws are different, so I won't comment on the UK situation. In the US, hate-speech laws are very weak (see most Republican and alt-right politicians and media stars). Most hate speech was traditionally policed by companies, not governments. In other words, people were afraid of saying something TOO offensive because they risked getting fired, not because they'd get fined or arrested (you really have to make an effort to get arrested for hate speech). With more people retiring, that restriction is lifted; now there are far fewer consequences in saying something offensive.

    With the portion of over-65s rising, expect this to normalize far more offensive speech. We've already seen this over the past decade; it will continue.

    For those familiar with UK, Canadian, or Australian laws, how is this dynamic playing out in your countries?

    “One of my friends is a retired USAF "rocket scientist". Every time the conversation turns to Mac vs PC, he goes off on an extended rant about why Mac OS X won't run on his Power Mac G5.”


    OSX runs fine on PPC G5 kit — and G4 or G3 — as long as you don't want anything newer than 10.5.8, Leopard. For which there remain some supported applications (I believe there's a browser forked off Firefox, for example, that maintains reasonable currency with Firefox itself).

    My guess is he wants to run the "cloud" version of Adobe Lightroom and it requires a more recent version of OSX than what he can run on the PPC G5.

    We belong to the same photography club. I'm not a Lightroom user, but the Mac/PC conversation that sets him off usually starts out as a group discussion of relative merits of Photoshop vs Lightroom and Stand-alone vs Cloud.

    FWIW, I'm also guessing he has a newer Mac, but pines for that PPC G5.


    I don't know of anyone who puts up a fully static site with hand coded HTML that very many people visit. I'm sure they are out there but not in any great numbers.

    I am in the process of converting one of my sites to static HTML, and it is distressingly hand-intensive because I'm working off a backup of a corrupted PHP site, using an HTML grab of the site for assistance. But there are at least three WordPress and PHP exploits in the code, so every single page has to be looked at (I have already run a few heuristic scans, but sadly one of the attacks left CSS tags and some short links around).

    Most of my personal sites are hand-coded; I generate text, edit photos, and use an editor to munge them into shape. But I've discovered after the last round of nonsense that the stuff that's been up longest isn't used much, if at all, so I have more or less taken my sites offline for a couple of years while I grind through making them static.

    It is a lot of work, and maintaining them is also work. Even with good design, having to use bash and sed etc to make site-wide edits is a PITA. But if I don't, the navigation links break or become annoyingly out of date.
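    For what it's worth, that kind of site-wide edit doesn't have to be raw bash and sed; a dozen lines of script will do the same search-and-replace across every page. A minimal sketch, where the site root and the old/new link strings are made up for illustration:

    ```python
    # Walk a static site and rewrite an out-of-date link in every HTML file.
    # All names here (site root, old/new link strings) are hypothetical;
    # adapt them to the site's actual layout.
    from pathlib import Path

    def rewrite_links(root, old, new):
        """Replace `old` with `new` in every .html file under `root`.

        Returns the number of files actually changed."""
        changed = 0
        for page in Path(root).rglob("*.html"):
            text = page.read_text(encoding="utf-8")
            if old in text:
                page.write_text(text.replace(old, new), encoding="utf-8")
                changed += 1
        return changed

    # e.g. rewrite_links("public_html", 'href="/old-archive/"', 'href="/archive/"')
    ```

    It doesn't fix the underlying annoyance (you still have to know what to replace), but it beats hand-editing every page when the navigation moves.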


    Disagree ... If only because "Fascism" is incompatible with "Democracy" & by the prevailing standards of the times, the US was at least partially, if not wholly "democratic" until very recently. [ Please note the careful positioning of the quote-marks ]


    (specifically, indymedia Australia seems to be basically dead and they've taken their archive offline, and since they were the largest single source of traffic that I felt an obligation to, "their" content can come down without problems. Likewise mozbike, both archive copies combined get less than 10 unique visitors a week).


    Google "Toby Young" - a truly offensive piece of work & considerably younger than me. Or - dare I mention the name - Farage? Plumbum-damage, here we come.


    ... Evolution only applies to imperfect replicators. Selection over natural variation, remember.

    If we build AIs that qualify, we are both going to die and have it coming for being too stupid to live. Seriously, I am not that panicked about AI safety, but letting the driving force setting the value systems of machine intelligences be "maximum number of extant descendants" pretty much guarantees Skynet.


    With more people retiring, that restriction is lifted; now there are far fewer consequences in saying something offensive.

    With the portion of over-65s rising, expect this to normalize far more offensive speech. We've already seen this over the past decade; it will continue.

    With respect to the US, that is starting to be seen as retired people with more or less secure financial positions become willing to talk about classified things they learned in and around the government. To a large extent, enforcement of secrecy depended on the ability of the gummint and its contractors to deprive offenders of a livelihood, and now that's fading for the elder cohort. The threat of legal action is there, but happens very rarely.


    Re: 'Maybe make us wear a badge?'

    No badges - I was thinking of something along the lines of people not staring or being stared at when they whip out their glasses to have a closer/clearer look at something. The point is to make what might currently be perceived as 'odd' differences into 'normal' differences.


    Since we're into 300+ comments....

    Too bad we couldn't work out a temperature trade last week with Sydney, AU. Above freezing for only a few hours of the last 10 days or so here in Raleigh, where a typical winter has only a few days below freezing in February.

    This morning it was 7F/-14C while Sydney had 117F/47C. For us and them this is crazy.

    And now I get to replace an outside faucet that froze solid. And got to go under the house Friday to cut and cap the line before the thaw got here.

    Oh, well. The joys of owning a 56 year old house.


    I think that social interaction differences are much harder to quantify and describe than relatively straightforward glasses and walking stick stuff. Not least because where's the gap between "Trump supporter" and "cognitively impaired"?

    For me it brings up the impossibility of using labels to enable others to accurately understand you. Labels are not always accurate and there are far too many of them, and which ones matter are not only situational, they change with the situation.

    For example when you first meet me it might be more useful to know that I avoid eye contact because it makes me uncomfortable rather than because I'm lying when I tell you my name. But two seconds later you'll be wondering "why is he still going on about eye contact, is he unable to perceive my boredom?". I used to say "I like boxes, there are so many to choose from" in response to people who disliked being pigeonholed. That's still accurate, but no longer quite as funny when people are increasingly wanting to know all the boxes up front. Shonia Laing's "I'm a white colonial middle-class anarchist" gives you some idea of that problem (it's a song title, song is also amusing but sadly not on the internet (blocked on utube)).

    There is a somewhat amusing SF short story about someone who has had "anger management issues" tattooed on their forehead as a warning to others that springs to mind.


    Gah, I got bit by the timeout and lost my long-winded reply.

    Summary: both lots of weather are normal, though.

    Also, living in the heat is a fairly simple adaption, which I wrote about a bit (and so did Andy) if you want to read it.

    Sydney tends to cool off at night, Melbourne not so much. Getting through 35 degree nights is much harder than 45 degree days and Melbourne is going to see more of the hot nights than Sydney. People die more of nights, and one nasty Australian adaption is "yeah, sucks to be them {shrug}".


    I can handle the heat. Where I grew up 5 to 30 days over 90F/32C were normal in the summer. And a few over 100F/38C was not all that odd.

    And we had a bit of humidity. It was where the Mississippi, Ohio, Tennessee, and Cumberland Rivers all merge together. Well, also the Clarks River, but it was almost trivial compared to the rest. And the latter two big rivers are dammed there into massive lakes.

    But I moved to Raleigh to get away from the cold except for a week or few a year. This winter is just brutal. For most of the eastern US.


    I'm disappointed by your argument that the Singularity is ruled out because it quacks like a religion. Should we also discount the possibility of nuclear apocalypse because it is superficially similar to the Christian apocalypse? At least you do follow this slander with a real argument (that nobody even wants to develop human-like self-directed AI); maybe you could have focussed on that instead?

    This reminds me of a well-known physicist who told me he rejects the Many-Worlds interpretation of quantum mechanics because he is an atheist. The argument starts from the premise that quantum states are not objective states of nature, but rather points of view of an external observer (as postulated in the Copenhagen interpretation). But now Many-Worlds claims that there is a quantum state for the whole universe. Which external observer could assign it? Clearly just the Christian god. Therefore Many-Worlds implies the existence of God, and since he is an atheist, he rejects Many-Worlds.


    Ideas always have lineages, and those lineages have baggage associated with them. The Singularity idea is christian eschatology stripped of its mystical elements. That by itself does not tell us anything about the validity of the idea itself, but it does inform the mindset of people approaching it. As Charlie pointed out, this lineage and baggage resulted in Roko's Basilisk, which reintroduces a concept of sin and absolution into the whole thing that it didn't really need; it also informs the approach people take to making the singularity happen (Ask yourself: Is there a difference in terms of motivation between christian millennialists who want to create the preconditions for the biblical end of the world, and singularitarians working to bring about self-improving AI?).

    The thing to keep in mind here is that the singularitarian mindset is indistinguishable from a deeply religious one. Both exhibit a certain resistance to evidence that runs counter to whatever their foundational texts say, both have a giant "and then magic happens" between our present now and whatever future they envision.


    Ideas always have lineages, and those lineages have baggage associated with them. The nuclear apocalypse idea is christian eschatology stripped of its mystical elements. That by itself does not tell us anything about the validity of the idea itself, but it does inform the mindset of people approaching it. This lineage and baggage resulted in MAD, which reintroduces a concept of sin and punishment into the whole thing that it didn't really need; it also informs the approach people take to making the nuclear apocalypse happen (Ask yourself: Is there a difference in terms of motivation between christian millennialists who want to create the preconditions for the biblical end of the world, and generals trying to start nuclear war?).

    The thing to keep in mind here is that the nuclear war is indistinguishable from a deeply religious one. Both exhibit a certain resistance to evidence that runs counter to whatever their foundational texts say, both have a giant "and then magic happens" between our present now and whatever future they envision.


    "Use search and replace on a post I disagree with" is not the effective rhetorical strategy you think it is.


    It's all very Futuretrack 5 - if you remember the Robert Westall book.


    This web site is the simplest I visit and it's not free for CS to run. Especially when you consider his time.

    Recurring costs of this site: around £990/year for hosting (it's a full scale colo box, not a VM) plus another £20-40 for domain registration (multiple domains) and maybe £100 in sysadmin fees (I'm cheap).

    But in terms of time it probably costs me the equivalent of 20% of my writing revenue, so probably pushing over £10,000 a year in opportunity costs.

    I run it as a loss leader/marketing exercise, but if I was doing it as a revenue earner I'd have to sell a shitload of ads or subscriptions to keep it going. (Say, a thousand Patreons at £10/year ...)
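    The break-even arithmetic can be made explicit; a quick sketch using only the figures quoted above (hosting, domains and sysadmin at the upper end, opportunity cost):

    ```python
    # Break-even sketch for running the site as a revenue earner, using the
    # figures quoted above. Domains/sysadmin is taken at the upper end of the
    # quoted range.
    hosting = 990          # GBP/year, colo box
    domains_admin = 140    # GBP/year, domain registrations + sysadmin fees
    opportunity = 10_000   # GBP/year, ~20% of writing revenue

    total = hosting + domains_admin + opportunity
    subscribers_needed = total / 10   # at GBP 10/year each

    print(total)               # -> 11130
    print(subscribers_needed)  # -> 1113.0
    ```

    Which is where the "thousand Patreons at £10/year" figure comes from: that roughly covers the opportunity cost, with the cash costs on top.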


    Incidentally, the not-insignificant running costs are part of the reason for the relatively firm moderation policy and my draconian approach to drive-by and flaming attacks on myself; I'm not paying good money to put up with abuse, and if you want the service to continue you should not spit on the guy providing it.

    This is distinct from the level of moderation you'll find in the comments on more formally funded media outlets, where letting the readers vent is kind of expected these days because it maximizes return visits (hence advert delivery metrics), but nobody is personally invested in the content.


    It clearly isn't. I was trying to make you address my point that exactly the same argument can be applied to the idea of nuclear apocalypse but failed.

    More generally, it annoys me to no end to see atheists treating Christian mythology as something relevant. It isn't. Please let this meme die. It is much more interesting to examine ideas in their own right than to contrast them to a 2000-year-old doomsday cult from the Middle East. It leads to quite ridiculous situations, such as the anecdote I told about the physicist using religion to argue about quantum mechanics.


    The dominant belief system of a sizable portion of the world's population is hardly irrelevant. Examining ideas "on their own right" without considering their historical and philosophical underpinnings is a really terrible idea.


    The dominant belief system of a sizeable portion of the world's population is completely irrelevant to whether exponentially self-improving AI is possible or not. It is also completely irrelevant to which interpretation of quantum mechanics best describes reality. It is completely irrelevant to any scientific question that is not related to human psychology.


    The thing is, if christianity is entirely irrelevant to the question of whether or not self-improving AI is possible and, if it is, what impact its introduction would have on the world, why are there so many parallels between it and the mindset of the people who think that the singularity is a real possibility? And given that these similarities exist, what does that tell us?

    We know that christianity got a lot of things wrong. We know that its idea of an apocalypse is bad and ridiculous and has led a lot of people to making really really bad decisions over the years. So now that those ideas are making a comeback amongst folks who define themselves at least partially by their rejection of the mystical claptrap of christianity, does that mean that the ideas are suddenly more credible? I don't think they are; I think that if self-improving AI ever happens, it will bear little to no resemblance to whatever singularitarians are imagining, and that all the predictions and hopes these people have will make it harder to deal with the reality of the thing as opposed to the concept of it.


    Re: Target and pregnancy. That story appears to be a myth based on a hypothetical, where a journalist was proposing a scenario that never actually happened.

    Also another example for the pile re: social media optimizing for emotion.

    "So let’s talk about Tumblr.

    Tumblr’s interface doesn’t allow you to comment on other people’s posts, per se. Instead, it lets you reblog them with your own commentary added. So if you want to tell someone they’re an idiot, your only option is to reblog their entire post to all your friends with the message “you are an idiot” below it.

    Whoever invented this system either didn’t understand memetics, or understood memetics much too well.

    What happens is – someone makes a statement which is controversial by Tumblr standards, like “Protect Doctor Who fans from kitten pic sharers at all costs.” A kitten pic sharer sees the statement, sees red, and reblogs it to her followers with a series of invectives against Doctor Who fans. Since kitten pic sharers cluster together in the social network, soon every kitten pic sharer has seen the insult against kitten pic sharer – as they all feel the need to add their defensive commentary to it, soon all of them are seeing it from ten different directions. The angry invectives get back to the Doctor Who fans, and now they feel deeply offended, so they reblog it among themselves with even more condemnations of the kitten pic sharers, who now not only did whatever inspired the enmity in the first place, but have inspired extra hostility because their hateful invectives are right there on the post for everyone to see."

    That kind of optimisation-for-outrage-eyeballs we see on many social media platforms has some weird side effects.

    The big weird one being that the platforms transparently optimize for really crappy supporting stories for your own side that are just barely good enough for you to reblog them.

    Are you outraged by police killing people? your facebook feed will only very rarely show you clear-cut stories like a sleeping 7 year old girl shot in the head by cops who got the wrong address with it all caught on badge-cam.

    You might occasionally see such a story... but 100x as often it will show you the other kind.

    Instead it will show you a story where some guy picked a fistfight with a cop, and he'd just robbed someone down the street, but the cop didn't have anyone with him, but some witnesses disagree on details, and... etc etc etc.

    And this may not be an artifact of the source data.

    Such that no matter what side you're on, you'll reblog, outraged that ANYONE is taking the other side on the case. And then they see your reblog and respond. If it were a super-clearcut story they might not respond; they might just shrug and say "ya, that is pretty fucked up". But it's not; it's just crappy enough that they see the details that support their own side. So it's almost always crappy stories.

    Hate feeding off hate all to boost the number of eyeballs looking until everyone involved hates everyone else involved more than life itself.


    But in terms of time it probably costs me the equivalent of 20% of my writing revenue, so probably pushing over £10,000 a year in opportunity costs.

    Yay, someone pricing their time correctly!

    Bear in mind a lot of patches and development work on Linux is contributed by the likes of IBM.

    Speaking of which, the IBM POWER series processors are also vulnerable to Spectre/Meltdown, or at least an attack close enough to scare IBM into patching the firmware on the POWER 7+/8/9 series, and they're looking at earlier models.

    Which means the only major processor line out there that hasn't been discovered to be vulnerable is SPARC, and to be honest, I think the word "yet" needs to be in there, because I'm willing to bet a similar attack to Spectre is possible.

    For an amusing war story regarding speculative execution in the PowerPC CPU on the XBOX 360, see


    The dominant belief system of a sizeable portion of the world's population is completely irrelevant to whether exponentially self-improving AI is possible or not.

    Correct, up to a point.

    However, my point is that the people who are interested in exponentially self-improving AI were mostly raised and socialized in a predominantly Christian-derived culture that predisposes them to see certain patterns. The whole idea of intellect separate from body, for example, is classic Cartesian dualism, which in turn was derived from Descartes's deeply-held Christianity (and builds on ideas implicit in a cluster of religions that surfaced in the Middle East in the early iron age). Sudden high speed change? That's another thing that apocalyptic mystery cults are keen on. Mind uploading? See also, souls/afterlife beliefs.

    As human beings we're pattern-matching organisms because pattern recognition is a vital survival trait, selected for by evolution over half a billion years. We also look for eschatological templates into which to slot our new observations and beliefs because it helps us deal with new data if we have an existing precedent. So when I see folks wandering off into exponential self-improvement, mind uploading, et al, what I note is that they're unconsciously not bothering to explore the gaps in the picture, the possibilities for AI that don't stick to the pre-existing map.


    If your point is that singularitarians develop conceptual blind spots because of their upbringing in a Christian-derived culture, and that future developments that fall in these conceptual blind spots are likely to be ignored, then I agree with you. This seems to me a good source of plausible yet hitherto ignored ideas.

    You seem, however, to be arguing for something stronger than that: that the concept of mind uploading, for example, is not worth exploring because it is derived from the Christian idea of a soul. I think this is as unacceptable as rejecting the Big Bang theory because it was proposed by a catholic priest trying to fit biblical creation into cosmology.


    I'm not denying that the idea of a Singularity can be traced back to Christian eschatology. I'm denying that this is a valid argument against it, it is just an ad hominem fallacy.


    Those are some mighty mighty broad brushes you're using there.

    It's like the claim that there are only N stories in the world, where N is small... if you simplify to the level where the story description is "bad thing happens, protagonist overcomes bad thing".

    If someone wrote a story with flip phones in 1970, "they're just recycling Star Trek" would not actually be a criticism that says anything very much about the likelihood of flip phones.

    Little devices you could carry in a pocket and use to talk to other people were something people liked the idea of. That matters when some of those people are engineers and materials scientists capable of taking baby steps towards actually implementing it.

    If you've got something that looks like it should one day be physically possible, and we've not discovered any solid reason it will be impossible... the fact that lots of people want it to exist one day because [insert cultural/social/religious/any other reason here]... if anything, that makes it much, much more likely to actually get implemented at some point.

    If I turned up 70 years ago proposing Daz fabric whitener, someone pointing out that it's just a re-hash of some religious belief about purity and purification wouldn't actually be telling us anything useful re: its practicality to implement.

    I'm pretty sure this is a case of "reversing stupidity is not intelligence". Putting a "not-" before random cultural/religious beliefs does not lead us to correct answers more reliably than a coin flip. Pointing out that something loosely matches some belief about the Trinity or Bugs Bunny doesn't tell us much about its real-world practicality.


    Given the attention drawn to the quote marks, I'll agree with you. If you'd just put them there, and not drawn attention to them, I'd have disagreed. Well, except that you said "by the standards of the time" which I would count as drawing attention to them.

    This is a real problem, but it's important to remember that democracy doesn't work in groups larger than around 50-150 people. "Representative democracy" is a very different beast from actual democracy. Charlie's speech does a good job of drawing attention to some of the (obvious?) drawbacks of representative democracy, but it's important to remember that the founders of the US Constitution didn't want a democracy. They didn't trust "the people". Senators were appointed by the governors of states, and various other tactics were used to preserve the power of the currently rich, while making it look like the power lay with the people. And the problem is, I don't totally disagree with them. Democracy doesn't scale.


    All replicators are inherently imperfect replicators. Perfection does not exist in the physical universe. (Possible exceptions: quantum states may possibly be perfectly equivalent, electrons may be perfectly identical to other electrons, etc.)

    OTOH, it might be possible to have replicators that would essentially only have identical copies or "dead" copies. But do note that "essentially". Still, the DNA code has evolved to be relatively tolerant of single bit errors. A designed code would not necessarily have that feature. But you couldn't make one that worked which wouldn't be tolerant of some multi-bit errors, and each degree of demanded perfection will increase the benefit of mutations away from that code.

    Also there are lots of benefits for being tolerant of single bit errors. So the best thing to do is probably make any errors as unlikely as possible (the DNA process has a self-correcting mechanism that usually works), and try to design things so that the rewards for variation are either minimal or beneficial also to humans. This won't work forever, but it may work "long enough"...a few million years might be doable, and if we start spreading outside the solar system isolation would keep any single malign mutation from killing us off. (For that matter, if we don't get AI rulers, we're likely to kill ourselves off within the century, so given that trade-off...)


    All human ideas have a history of similar ideas. The Singularity is one of them. Ideas with a similar structure tend to aggregate other ideas that fit with them. But parts of the aggregate can be true or false without implying anything about other parts of the aggregate.

    I tend to think of myself as a "believer" in the Technological Singularity, but I sure don't think there are reasonable grounds for a lot of the beliefs that tend to attach to that. And the more extreme predictions are almost certainly invalid. The thing is, there are many possible ways to end up at "the Singularity", and the use of the word "the" there is an invalid implication. A singularity awaits us in our future, and we don't know what kind it will be. An all-out atomic war would be a singularity. A self-improving AI is a raft of different singularities, depending on the motivational structure, etc. A 90% fatal pandemic would be a singularity. A giant meteor impact would be a singularity. Etc.

    That people won't design an AI that matches a human in capabilities and motivation is essentially guaranteed. The first really competent general AI will necessarily be built before we have the competence to give it a human motivational structure, even if we wanted to. This doesn't mean it won't have essentially superhuman powers. (As Charlie pointed out, ordinary corporations do that.) But it means that it will have goals that we don't, perhaps can't, understand. It will, of course, be designed to explain itself in comprehensible terms, but that very design means it will be designed to lie. And if no catastrophe wipes us out first, that kind of AI is now inevitable. I'm still guessing that it will show up before 2035, since we seem to be clearly in the ramp-up leading that way. What I don't know is what agency will be building it. It makes a big difference whether it develops out of a hospital administrator or an automated Pentagon, or possibly out of an automated airplane or spaceship designer. But it will be some major project, and my guess is that it will evolve out of replacing middle management. (But that's a wild guess, not an assertion.)


    My personal estimate of mind uploading:

    Yeah, it's possible. But it's so difficult that it won't happen until well after a successful transition through the technological singularity. I expect that even reasonably good models of individual minds (i.e. simulations) are lots easier. (And by reasonably good I mean good enough so that no human could tell the difference.)

    FWIW, I doubt that uploads will ever happen, though simulations may well. Not because it's impossible, but because it's too difficult. Whether it would also inevitably involve killing the person being uploaded I remain unsure about.


    "that the concept of mind uploading, for example, is not worth exploring because it is derived from the Christian idea of a soul."

    It's also a somewhat dubious derivation in that it overlooks a much more obvious source: the idea of software and hardware. That is a matter of everyday experience in a way that Descartes is not, particularly since the pool of people who like those kind of stories includes a disproportionate number of computer geeks. When the idea of the nature of the hardware being of much lesser importance than what software it is running comes into everything you do, making the same conceptual distinction in the case of body and mind is not a deep philosophical insight, but a straightforward case of state-the-bleeding-obvious, and the idea of taking a snapshot of your mind state and installing it on arbitrary hardware is no more than an engineering problem from the time you first have it. Heck, the very term "uploading" is taken directly from computery.

    It's also a much closer match. That mention of Descartes often comes with the attached tag "...but he was wrong", and AIUI the answer to how people can still get away with saying this without everyone going "don't be silly" and waving computers in their face is that what Descartes actually said is in fact mostly orthogonal to the SF/computer concept that gets called by the same name.

    Then, there is no well-defined "Christian concept of a soul"; you can get definitions ranging from the most concrete to the most nebulous ends of the scale, depending who you ask. The concept meant by that phrase probably owes more to the idea of a "ghost", which (like the word) is Germanic and pre-Christian. There are also plenty of other concepts along the same general lines, many of which have nothing to do with either Europe or the Middle East. Refusal to accept death at face value is after all a very ancient and popular mode of thought.


    Y'know, Charlie, there's a bigger picture here: first, the really, really big companies have a much longer lifespan. IBM's over a century old, now, for example, and let's not mention railroads in the US.

    That being said, corporations are different from AI, because a) they're run by multiple entities (humans), and, even bigger, b) some of these humans are on the boards of multiple corporations. THAT brings the knowledge and attitudes to multiple corps... and, if they're effective/rich/powerful enough, spreads by contagion. And others try to emulate that (similarity).

    And as I type this, I guess that means they're magickal entities, not AI, with their own agendas... and we know most, if not all, want more power and control.


    EC wrote: "Nobody in 2007 was expecting a Nazi revival in 2017, right?"

    You mean you didn't? Seriously. The signs were all clearly there by 1997, let alone 2007 :-(

    Um, agreement. Back then, in the early oughts, I was saying that Clinton had been building bridges (some) to the 21st Century, while the Shrub was building bridges to the 1930s.


    In fact, I've met the "free-range mom". She's currently running for county council here in Montgomery Co, MD, USA (and I support her).

    Of course, I was taking city transit in Philly starting in 5th grade.


    That was because they started making US cars cheaper and crappier by the seventies. Before, say, 1966 they were better (I used to hear arguments about earlier dates); then the "make them cheaper, so they'll keep buying a new one every two-three years" approach came in.


    Really: Everyone I knew, or have met, who was over about 11 was paying attention. My late wife (born 1954) told me she had, and my new ladyfriend (born '58) told me she remembered.



    "ESR"? I can see my old, ole acquaintance, ESR, very well known Libertarian, being more than slightly unhappy with that....


    Oddly enough, I can picture that.

    Perhaps it's because I'm in the middle of rereading an old sf novel, called, um, Saturn's Children, by some non-American SF author....


    "And the problem is, I don't totally disagree with them. Democracy doesn't scale."

    I don't either, but the problem is who custards the custard. Since there isn't a guaranteed supply pool of Vetinaris to draw from, you have to select governors in some way from the population at large; and since arseholism is a chaotic function, there is no way to define a partition of the population that separates arseholes and potential governors. Arseholes in government are therefore inevitable, and the best you can hope for is a system to periodically replace them with different arseholes, sufficiently often to keep everyone happy, that doesn't involve anyone getting shot.

    The difficulty with maintaining such a system is that any group of arseholes will always try and persist, so something has to prevent the glue setting faster than other people can notice and scrape it off. And Charlie's article is all about accelerants.


    "But in terms of time it probably costs me the equivalent of 20% of my writing revenue, so probably pushing over £10,000 a year in opportunity costs."

    But don't you get some return in the form of story ideas, or at least the ability to bounce ideas off of the general reading public to help guide your writing? If so it seems like time well spent.


    I am reminded more of biological entities; bees being one obvious one, and also things like bacteria swapping genes with each other on USB sticks, or how the concept of "individuals" doesn't really make sense with a lot of fungi.


    You might want to forward through the video until you get to hear how I say that sentence.

    Hint: trying for deadpan.


    I was taking the city bus to school in grade 5 too. Two buses with a transfer (very indirect route). Or in good weather I could ride my bike.

    Nowadays it wouldn't be allowed — enforced through anonymous complaints to the authorities.


    OK. Sorry about misunderstanding you. I don't listen to video, because it's stressful and error-prone!


    Please reread what I wrote. A lie is defined as representing something you know to be false as true. Fiction says, up front, this is not true. It is therefore not lies.


    Not exactly. As I read it, the OS providers are doing software mitigation, so that the kernel page tables (kernel memory) and the userspace page tables are separate, and programs will need to look at two separate tables rather than one.

    I also read that the hit varies - for browsing you won't notice much, but I've seen an estimate that PostgreSQL will take a 17% hit.


    I will note that I started hearing about Poettering and systemd around the time I read that M$ bought 20% of Red Hat.


    Re Linux's "death grip" - it's a couple of years now, I think, that I read that > 55% of the entire Web was running on Linux.

    Oh, and RH does actual support for 10 years, though the last four are security and bugfixes only.


    Well, yes, I would like to walk into a store and see and check out what I wanted. The stores, even big box, carry less and less, because it's online.

    Will not make the mistake of buying boots online again.


    The singularity is total crap. Firstly, no form of computable growth (let alone mere exponential) leads to a singularity. Secondly, there are known hard boundaries, of which the best-known is the Turing-Goedel one. And you can safely ignore any of the drivel spouted about NP completeness. And, yes, it's a modern version of belief in the end times.

    It is POSSIBLE that different computational models do not have a Turing/Goedel limit, but that's not the way the smart minds think. Quantum computing is hyped up to be a strictly more powerful model, but I have seen no evidence that it is (in the sense of what can be computed). Computing with (true) real numbers is, but is not realisable. I know that people have worked on this, but nobody has ever found one that is both strictly more powerful and realisable.

    There are more powerful, realisable models in the sense that they can solve problems in a reasonable time that are intractable in the simple Von Neumann model, but they can't actually calculate anything that the latter can't. If anyone delivers a working quantum computer, it may be one such, probably for a VERY limited set of problems.
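    The first point above (that no computable growth, exponential included, produces a singularity) can be checked with a few lines of arithmetic. A minimal sketch, using hyperbolic growth as the contrasting case that genuinely diverges in finite time:

```python
import math

# Exponential growth x(t) = e^t is finite at every finite time t,
# however large: there is never a moment at which it "reaches" infinity.
exponential = [math.exp(t) for t in (10, 100, 700)]
assert all(x < float("inf") for x in exponential)

# Hyperbolic growth x(t) = 1/(T - t), by contrast, really does diverge
# as t approaches the critical time T -- a finite-time singularity in
# the literal mathematical sense.
T = 1.0
hyperbolic = [1.0 / (T - t) for t in (0.9, 0.99, 0.999, 0.999999)]
assert hyperbolic[-1] > 10 ** 5  # blows up without bound near T
```

    The metaphorical use of "singularity" rides on the second curve's behaviour; the first curve, however steep, never gets there.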


    Oh, for the Good Old Days of "fair use". I was there for Canter and Siegel, when their ISP's servers crashed three times in the first 24 hours after the Green Card spam, under the weight of incoming complaints of violation of Fair Use.


    Oddly reminiscent of the John Wyndham story....

    "It is a wine of virtuous powers; My mother made it of wild flowers."


    I don't think "fair use" means what you think it does. And the complaints were not for violation of "Fair Use." And C&S were late-comers anyway, I remember

    As I recall, I've personally met the first couple of arpanet spammers.


    "atheists treating Christian mythology as something relevant. It isn't."

    Oh dear. NOT EVEN WRONG. Both Christian & Muslim mythology are very highly relevant - if only because they - & their believing followers - can KILL YOU & maim you & torture your entire family & totally fuck over the planet (again). Grrrr ....

    Sorry about the rant, folks, but talk about missing the point. Yes - WE KNOW it's a load of foetid dingoes' kidneys, but they don't, & they have far too much power.


    Walking to & from infants' school at age 5 - about a half-mile each way, 4 times a day. Yeah. Wonder why (along with cycling 5 miles a day from age 11) team games & sports were totally unnecessary to keep me fit & lean?


    You clearly don't know what you're talking about. First of all, singularity is not meant in the literal dividing-by-zero sense, but in the metaphorical physics-loses-predictability-when-hitting-singularities sense.

    Secondly, there is no speculation that quantum computers can solve uncomputable problems. This should be obvious from the fact that one can simulate quantum circuits on a classical computer, and was proven by Deutsch in the very first paper that defined what a quantum computer is.

    But this is anyway beside the point: one doesn't need to go outside computable functions, or even outside BPP, to get an AI that is much more intelligent than us. Keep in mind that AlphaZero runs on a vanilla classical computer.
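    The simulability claim above is easy to demonstrate at toy scale: a quantum state is just a vector of complex amplitudes and a gate is just a unitary matrix, so ordinary arithmetic suffices (the catch being that the vector has 2^n entries for n qubits, which is why simulation is possible but not efficient). A one-qubit sketch in plain Python:

```python
import math

def apply(gate, state):
    """Multiply a 2x2 gate matrix into a length-2 complex state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

# The Hadamard gate, the workhorse of quantum circuits.
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

state = [1.0, 0.0]          # the basis state |0>
state = apply(H, state)     # superposition (|0> + |1>)/sqrt(2)
probs = [abs(a) ** 2 for a in state]
assert all(abs(p - 0.5) < 1e-12 for p in probs)  # 50/50 measurement odds

state = apply(H, state)     # H is its own inverse: back to |0>
assert abs(state[0] - 1.0) < 1e-12 and abs(state[1]) < 1e-12
```

    Nothing uncomputable happens anywhere in that process, which is the whole point: the classical machine tracks everything the quantum circuit does, just exponentially slowly as qubits are added.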


    I do not know about steered beams, but I do know that on a cell tower there are usually several antennas, each of which is aimed to cover only a part of the tower's surroundings. I think this is due to transmitter design - it is easier/cheaper to manufacture antennae that have good coverage in a 90-120 degree arc (and hook 3 of those up) than it is to get an omni (360 degree arc). That said, I am not aware if any base stations are aware of how far a cell phone is from them. Even if they are aware, I do not think this information is being sent to the provider's server park.

    I guess one of the saving graces (for now) is that streaming and storing everyone's more-or-less accurate location constantly is something mobile providers have to actually work to have. So hopefully, it's not just a switch to be toggled and a cable to be hooked up to some simple box. Realistically, it of course already exists and is only waiting to be found due to some leak or governmental fuck up. :)


    The Modern Western concept of self (or person, or individual - with the complete set of additional concepts we Modern Westerners attach to it) is not universal to human experience. There are notable differences both with extant non-Western cultures and with historical Western cultures. This observation is not new or even vaguely controversial and this lecture by Marcel Mauss from the 1920s expresses just some of the empirical base from which it is made.

    This is not specifically an argument against the possibility of "uploading" an "individual" human mind, but it means that some of the concepts relating to what that would even mean are actually only assumed to be valid and have not been demonstrated. One of these is that there is such a thing as "mind" in the first place.

    It's not some kind of "lefty PC gibberish" to claim Cartesian mind-body dualism is not an accurate representation of the human condition. The problem is with people treating the assumptions of their historical and cultural context as though they are candidate null hypotheses, because... well everyone knows that's true, right? We live in a historical and cultural context that is shaped by its legacy, replete with assumptions that are rooted in dualism, in various religious traditions and in post-Enlightenment Modernity with its various tropes and memes (the PWE for instance).

    Note that drawing a software versus hardware metaphor doesn't really help here. It breaks if you explore it further (e.g. try to map short term versus long term memory to volatile system memory versus mass storage*). It is itself rooted in a context with underlying Cartesian assumptions, so many people "get" the hardware versus software distinction by implicitly or even explicitly comparing it to mind versus body anyway. It's not a very accurate model for how the squishware actually works, either.

    What are the implications of "uploading" if there is no such thing as "mind"? It means there's no structured data to "read" from your brain. We'd never have a relational model of the information in someone's head, we'd never be able to express your thoughts and feelings in XML or JSON. But perhaps we could store physical state as a form of raw data. If there's no such thing as mind, then what is the locus of the human experience? How important is embodiment? How important are the various nervous systems, internal symbionts, intestinal bacteria? Can they be included in the model, or at least in the raw data? Is "uploading" then a matter of simulating all the molecules involved along with the chemical, electrostatic and electrodynamic, physiological and other processes attending? Like a software emulator, but a lot more complex?

    Doing that would require a very large processing capability in conventional terms. It might not be possible to make it work exactly the way that a physical instance of the wetware would work, since it involves simulating the various states required, including state transitions that are subject to uncertainty. Would molecular level be enough?


    "You clearly don't know what you're talking about."

    Not necessarily about you, but people inside the cult looking out always seem to say that. On the other hand, your interpretation of what others are saying here is flawed.


    I happen to agree, but the real problem is that most people seem to think uploading means "you" "wake up" "inside a computer" (sometimes after "you" die, which is also a process, not an event).

    Unfortunately for all of us, there's a much simpler way to "upload" a person, at least their online personality. All you have to do is train a computer to respond as you do. It's sort of a cheating variation of the Turing Test*--if no one can tell the difference between you and your online doppelganger, is it you or not? Note that we're not saying that it's "you," only that, unless someone is watching your lips move in real life, they can't tell whether it's you or that computer.

    Unfortunate? IIRC, Google took out a patent on this technology most of a decade ago. If I had to predict, it's going to be developments of this technology that will end up being as good as we get on computer uploads.

    *Actually, it's the ultimate in identity theft, but whatever.
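    The mimicry idea can be illustrated with even the crudest of models. The sketch below is a toy bigram Markov chain (nothing to do with any real patented system): "train" it on a person's past posts, then sample it to emit text in their style. Real doppelganger systems would be vastly more capable, but the shape of the trick is the same.

```python
import random
from collections import defaultdict

def train(corpus):
    """Map each word to the list of words observed following it."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def babble(model, start, length, seed=0):
    """Emit a chain of words, each drawn from the followers of the last."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and model.get(out[-1]):
        out.append(rng.choice(model[out[-1]]))
    return " ".join(out)

# Hypothetical "past posts" of the person being imitated.
posts = "i think the future is weird and i think the net remembers"
model = train(posts)
sample = babble(model, "i", 6)
assert sample.startswith("i think the")  # "i" is always followed by "think"
```

    Scale the corpus up from one sentence to a decade of someone's posts and emails, and swap the bigram table for a modern language model, and the "is it you or not?" question starts to bite.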


    Unfortunately, I suspect we'll get the singularity we deserve... damn humans!


    I should go a step further and state this as a principle: "Every sentient race gets the singularity they deserve."


    Speaking of Dark State. This morning my Google Android phone alerted me to the fact that "Charles Stross has released a new book 'Dark State' today". I've never told it I was going to buy that book the day it came out. I've not pre-ordered it and I've done no Google searches. It's never alerted me to any other author's activities which is also oddly accurate as there are no others that I'll buy on the first day.

    So it amazingly accurately figured out I was going to buy that book and no others. Weirdly, it completely failed to figure out that it's being released the day after tomorrow in Australia and thinks I live in the USA despite it knowing my home address and Australian phone number.


    It's probably poor taste to cite Woody Allen these days, but this does make me think of the following:

    “I don't want to achieve immortality through my work; I want to achieve immortality through not dying. I don't want to live on in the hearts of my countrymen; I want to live on in my apartment.”

    If you s/my apartment/a honking great computer/, this seems to be an expression of the thing the transhumanists wish for, put their faith in. That's specifically in contrast to what you've described, which is a sort of updated, Turing-test compliant version of living on through one's work in the memories of others. Maybe living on as a chatbot would be more than that, but dog knows Google and others could easily do that right now if they chose.

    There are traditionally two ways to attain if not immortality then at least "posterity"*: through works (one's own and those of others) or through having children. The direct memory of others who knew you is not sufficient - they need to write about it and for that writing to be preserved and read.

    The hope I guess we can hold out for is that there will be a last generation to which natural death is a thing, due to advancing medical science, and that generation is older than we are. Unfortunately I suspect that generation is still to be born. And that assumes no climate based population crash.

    * Ways that are known to work, anyway.

    I see. So you are criticizing me, but it's not necessarily me. And you avoid the question of whether Elderly Cynic was right or not, you are just complaining that I pointed out his mistakes.

    Furthermore, you accuse me of misinterpreting people, but without actually being specific enough to be refutable.

    Like, seriously, man?


    "I see."

    Exciting, isn't it!

    "So you are criticizing me, but it's not necessarily me."

    Do you think all observations are either criticisms or affirmations?

    Well, are you "inside the cult looking out"? I've no idea, so this was (maybe still is) an opportunity for you to deny being a true believer.

    This is one of those patterns in propositional logic that occasionally trips people. Given P->Q, does Q imply P? The correct answer is no. Q doesn't negate P either. In certain specific circumstances where the probability of encountering Q when P is not the case is known, it is possible to design an experiment to "fail to disprove" P enough times that it is very unlikely that "not P" is true (believe it or not, this is how DNA paternity testing is done), but that isn't available normally, where Q has the same value as an anecdote.

    Oh, and it trips people because P->Q (if P then Q) isn't the same as P<->Q (if and only if P then Q), and some people seem to have trouble picking which applies - that is, which claim is being made.

    " accuse me of misinterpreting people"

    You're choosing to interpret that as an accusation rather than as advice or an observation. I, however, have no interest in making a point and limited interest in arguing. My suggestion is to do something that makes you happy, then come back and re-read some of the things you responded to with the possibility in mind that some different interpretations may be available. Seriously, things are better with less adversarial bickering (man).


    "First of all, singularity is not meant in the literal diving-by-zero sense, but in the metaphorical physics-loses-predictability-when-hitting-singularities sense."

    Those are two aspects of the same phenomenon. I get very bored with people talking about the increasing rate of IT or other change, and I point out (to no avail) that it is slower than when I was younger and is currently slowing down. My point stands, unchanged.

    I can assure you that there IS speculation of the absence of a Turing/Goedel limit. And, while I could explain why what I posted is correct, I suspect that you would have difficulty following it. I am aware of that proof, but it is not that aspect I am referring to. And, as I said, explicitly, it is very unlikely.

    Your last paragraph shows that you don't understand computational or complexity theory. Even were that class of AI to be more intelligent than us, that is only in the sense that one human is more intelligent than another. And my remarks about NP were referring to the whole of that area, including BPP. Again, I could explain, but I suspect that you would have difficulty following.

    If anyone is interested, the issues that I referred to as tricky are NOT primarily to do with the actual computation, but the class of questions the models are considering, though the computational primitives have an effect on that. Specifically, yes/no versus an actual value, and deterministic versus probabilistic - and you will NOT find the latter in computer science textbooks.


    And I should know better than to try to represent symbols with plain text when there are perfectly good HTML entities available. That should be P→Q for IF P THEN Q and P↔Q for IFF P THEN Q.

    Anyhow, the pattern I've been referring to is called affirming the consequent, though it shows up with a twin called denying the antecedent. And I can't believe I'm talking high school logic (I blame fever).
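    For anyone who wants the pattern spelled out mechanically, the whole thing fits in a four-row truth table, which a few lines of Python can enumerate (just the textbook definitions, nothing more):

```python
from itertools import product

def implies(p, q):
    """Material implication: P -> Q is (not P) or Q."""
    return (not p) or q

# Valid: modus ponens -- in every row where (P -> Q) and P hold, Q holds.
assert all(q for p, q in product([False, True], repeat=2)
           if implies(p, q) and p)

# Invalid: affirming the consequent -- (P -> Q) and Q do NOT force P.
# The row P=False, Q=True is the counterexample.
counterexamples = [(p, q) for p, q in product([False, True], repeat=2)
                   if implies(p, q) and q and not p]
assert counterexamples == [(False, True)]
```

    The single surviving row is exactly the "Q without P" case that makes the inference fallacious.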


    I'm not interested in engaging in meta-level arguments. If you want to make a point about the actual issues at hand I'll be happy to respond.


    "As I recall, I've personally met the first couple of arpanet spammers."

    That's going back a bit :-) I have been using the Internet only since 1979, though I was using wide-area networking a bit before that.

    Yeah. Fair use doesn't mean what most laymen think it does. If I recall, there was some discussion (when?) about constraining the proportion that any one agent could use but it was abandoned as a hopeless task. Some nodes have done (do?) it, but it's impossible to specify precisely. It's now politically impossible, as well as technically.


    You need to read more modern textbooks. This stuff is standard.


    Oh and ISTR that "AI as mimicry" theme being explored in a novel by a certain SF author.


    I have a PhD in theoretical physics and have published papers about quantum computing and complexity theory. There is no need to hold back on your arguments.

    As for needing to break outside of BPP to get a superintelligent AI, this is bollocks. It is true that an AI capable of solving NP-complete problems in polynomial time would be scarily powerful. It is also true that such an AI will never exist. But one can still get superhuman intelligence inside BPP. Just imagine a human being that thinks 1000 times as fast, or can remember 1000 times more data than we do. These are just constant factors, which do not change the complexity class. Still such a human being would make mincemeat out of me in any intellectual competition.

    I think it is clear that this is possible. Adding more memory and CPUs to a computer is easy. The clock rate is much harder to increase, but is already in the gigahertz range. Our own hardware, on the other hand, is fixed.

    And this is just about the hardware. Our software is laughable. It was evolved to hunt in the savannah. We suck at doing math. Now consider software that is actually designed for intellectual prowess, and can take full advantage of the powerful hardware available. We have no chance.
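    The "just constant factors" point is easy to make concrete: a 1000x hardware speedup rescales a cost curve without changing its shape, which is exactly why it leaves the complexity class alone. A toy comparison with illustrative step counts only:

```python
import math

def steps_quadratic_fast(n):
    """An O(n^2) algorithm running on hardware 1000x faster."""
    return n * n / 1000

def steps_linearithmic(n):
    """An O(n log n) algorithm on unremarkable hardware."""
    return n * math.log2(n)

# At small sizes the constant factor dominates...
assert steps_quadratic_fast(100) < steps_linearithmic(100)
# ...but the asymptotic class reasserts itself at scale: the speedup
# only moved the crossover point, it didn't abolish it.
assert steps_quadratic_fast(10 ** 6) > steps_linearithmic(10 ** 6)
```

    Which is the commenter's point from the other direction: a mind sped up by any constant factor stays in the same complexity class, yet is still overwhelmingly faster at every problem it could already solve.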


    Well at 346 you are not really following Charlie's point, for reasons I go into at 380.

    "You seem, however, to be arguing for something stronger than that: that the concept of mind uploading, for example, is not worth exploring because it is derived from the Christian idea of a soul."

    My unpacking would be to say: the concept of mind uploading may not be possible. The only reason most people think "mind" is a valid concept is because it's part of our social-cultural-historical baggage and we treat it specially, in a way that may not be an accurate model for the actual situation. What we know from modern neuroscience suggests that it may be rather more complex - with all sorts of biological factors at play.

    The concept of "mind" that you need for "uploading" is not in fact demonstrated independently, doesn't reflect empirical evidence and we probably have it because of these Christian ideas we caught through our culture. It isn't that it is "tainted" be association with Christianity, it is that if Christianity is the only reason we think it's valid, then that isn't a good enough reason to treat it as valid.

    You may disagree that this is the only reason it is treated as valid, there are all sorts of interesting possibilities to explore (and note that I say explore rather than argue about). But the more that discussion about these points looks like the expression of a divergent, enclosed, even hermetic worldview - which (I think) Charlie has been suggesting things like Roko's Basilisk represent - the more it looks contrived and less likely to have an external referent. The more it looks like an offshoot of the background Christian worldview, in fact.

    I will avoid ranting about how having a RationalWiki implies there should also be an EmpiricalWiki. Philosophy of science is less meta and more core than many folks might think.


    Every word of which is true, BUT: what seems to set us & some other animals apart from being slow computing machines (or even quite quick ones - differential equations processed to catch a moving ball, f'rinstance) is "intuition", the ability to join up apparently unconnected data & observations to produce "Original Ideas". AFAIK there is no sign of this anywhere on the present AI horizon. And the Go-playing computer was optimized for playing Go, not as a general AI, IIRC (?)

    If/When such an "ability" shows up, or looks as though it is going to ... well, then is the time that a "real" AI, as opposed to a very limited one, is coming. Whether it would look like a Culture Mind, or something much nastier ... well, that's the big question, from our p.o.v., isn't it?


    Not when I last looked. There was some elementary stuff, true, but I am talking about a proper analysis of the theory.


    I wouldn't bet on "intuition" saving us from irrelevancy. There isn't anything magical about our ability to produce "original ideas".

    There is at least one person seriously working on making human-like AI: Fields medallist Tim Gowers. He is trying to write a computer program that proves theorems in the way human mathematicians do. He wrote a series of blog posts about it starting here. You might also be interested in his lecture about it, or the paper.


    I wasn't really following Charlie's point. I was explicitly expressing uncertainty about what his point was.

    And I do indeed disagree that Christianity is the only reason why we think that the concept of "mind" is valid. The fact that a large amount of AI/cyborg/mind uploading fiction comes from Japan - a decidedly non-Christian country - is very good evidence against that. I think Pigeon's conjecture from this comment that the wide acceptance of the idea is due to the ubiquitousness of computers and the software/hardware analogy is much more plausible.

    Personally, I think that any physical process can be efficiently simulated with digital quantum computers (known as the Church-Turing-Deutsch thesis), and furthermore I guess that the quantum part is not relevant for human cognition, so digital classical computers should be enough to simulate our bodies well enough. But if you got to this point, I don't see what obstacle to human-like AI/mind uploading could possibly remain. Any relevant biological factors can be simulated as well. And there is clearly a huge amount of irrelevant biological processes that can be left out in order to improve simulation speed.

    Which discoveries of modern neuroscience are you alluding to? And how could they stop a human being from being simulated?


    You clearly don't know what you're talking about.

    YELLOW CARD WARNING: the above phrase is a classic ad hominem attack, and as such a violation of the moderation policy. If you do it again I'll ban you from this thread (a red card). Do it on another thread and I'll ban you permanently.

    Nuanced disagreement here is fine. Personal attacks are not.


    I'm sorry, I won't do that again.


    Re: '... I avoid eye contact because it makes me uncomfortable rather than because I'm lying when I tell you my name.'

    No idea whether this actually applies to you or is a convenient example.

    Anyways, I have met and worked with folks who were unable to do the eye contact thing. Since I have a tendency to ask things outright esp. in a work situation, I asked why. Fortunately, that person told me and there were no hurt feelings either way afterwards.

    In this particular instance, it was a mix of culture and personality. That person's native culture said: never look someone senior directly in the eyes - it's very rude/argumentative. At the same time, her personality tended toward extreme shyness - not a mixer with anyone. And although her interpersonal style was considered appropriate (normal) within her native culture, it's considered borderline rude in the more extroverted West.

    Cultures really make a difference in how and to what extent personality traits are allowed to emerge/are considered 'normal'.


    "And there is clearly a huge amount of irrelevant biological processes that can be left out in order to improve simulation speed."

    Oops, stop right there (I think). How do you know that said biological processes are "irrelevant"? And are you 150% certain of that? It's a bit like what is now known to be the myth of "junk" DNA, doing nothing "useful", isn't it?


    Also ... and very unfortunately, the blog posts mentioned involve a good understanding of Set Theory, which is an empty field in my understanding. I've never done any at even the most elementary level.

    A short primer would be very useful


    Re: '... there is no way to define a partition of the population that separates arseholes and potential governors.'

    Yes there is - it's called neuro/psych. Unfortunately, because psychoanalytic theory (aka Sigmund Freud) was the be-all in the US* for decades but was subsequently shown via lawsuit (the only 'honest' test understood within this culture) not to be reliable, there's considerable reluctance to seriously consider that current psych is of any use.

    BTW, many large corps use psych testing in their hiring process, so if it's good enough for the large corp$**, it should be good enough for the common people.

    * - Think we all know this: there's much more cultural spill-over from the US to the UK and other English-speaking countries than in the other direction.

    ** - Okay, the large corps are testing at the entry to mid levels mostly and for quite different and varying personality traits and aptitudes. But the tests exist and have a much better predictive success rate than humans in identifying 'best fit'.


    Re: ' ... biological processes are "irrelevant"?'

    Yes - would really like to hear this argument, plus see some tests and results.

    Also, definitely would need a primer on 'set theory'. Briefly touched on it in a Phil of Logic course ages ago, but given how often it is mentioned related to computing, this branch has probably grown quite a bit.


    I suspect that before we can "upload" a mind we'll need to have a perfect simulation of the human brain, including issues like brain chemistry and neural connectivity. Then we'll have to go to the next step and be able to simulate a particular human brain and only then will we be able to "upload" memories and thoughts.

    The idea that we'll be able to simply "run the program" of someone's mind on a futuristic PC without simulating brain conditions AND have that mind agree that it is Bill Smith or Juan Garcia is ridiculous. We'll need to simulate the actual organic substrate.


    Maybe I should write "...AND have that mind agree that it feels like Bill Smith or Juan Garcia is ridiculous" instead. Obviously the mind's name would be deeply encoded.


    The overwhelmingly general view of the Singularity does, literally, involve uploading you.

    Now, several items:

    1. How do you know you've been uploaded? My answer is to ask: when you try to observe, via eyes or whatever, what is the you that's looking? Have you been cloned into the computer (two points of view, whose consciousness diverges), or do you have only one point of view (is the body dead, or unable to wake, with no brain waves)?

    2. It's not so much a Christian afterlife, or even an apotheosis, as a way to get around dying. Now, what do you do once uploaded? I'd think you'd want to be able to do things in the RW, unless what happens inside the systems is that much more fascinating. (And can you have sex with another uploaded person?)

    For 2, I actually wrote a short story about an alien race's Singularity that, um, let's say went a little wrong, and I'm looking for a venue, as I'm still shopping it around - it's a short.


    Yes, it does, and yes, they did violate fair use, and everyone in the newsgroups I hung out in agreed to that proposition.

    Spam was not within fair use; I don't quite remember if commercial advertising was allowed yet, and certainly not cross-posting to every single newsgroup.


    That's definitely a problem. And those who are attracted to positions of power are generally the last people who should have access to them. Which rules out just about everything but choosing your authorities by lottery. Large democratic systems always present too large an attack surface. Civil services have a history of developing into aristocracies. So do warlords. Monarchy selects initially for the power-hungry and then later for the stupid. Oligarchy is actually less bad than most of those, though it also has its unpleasant failure modes.

    The thing is, if you select your leaders by lottery, it becomes even more important that power be decentralized. Of course, it would be possible to argue that it would be hard to do worse than the current systems, but I'm not certain this is true. We've avoided thermo-nuclear war for over 50 years now...and some of the governments have seemed to mean well (as well as being greedy, selfish, self-important, power-hungry nepotists).


    My mistake, I should have linked to the last post in the series instead, as it goes over the AI stuff and mostly avoids the mathematics.

    He started the series by showing different solutions to some toy problems in analysis, and asked the readers to rate them in terms of clarity, and to guess which were written by humans and which by a computer. His goal was to find out whether his program could pass this sort of mathematical Turing test.

    As such the mathematical content of the problems is not relevant, so I hope you'll forgive me for not teaching analysis to anyone.


    I think that you have a different definition of "The Singularity" than I do, and also a different one than Vernor Vinge used in the original paper.

    I admit that I tend to include things that he does not indicate inclusion of, so that my definition is wider than his, but his initial definition is a lot wider than that of many people.

    By my definition a Singularity is guaranteed, as the current situation has no stable continuation. I hope for a good outcome, though I actually estimate the odds at 50% or less. And I count global thermonuclear war as a Singularity. I count superhuman AI as a Singularity. I count 75%-or-more-lethal biological warfare as a Singularity. There are lots of paths to the Singularity. Some of them are desirable. And there's no way to avoid it happening. (We've already come within minutes of global thermonuclear war.)


    I'm just guessing that some biological processes are irrelevant, based on general reductionist grounds. Things like hair growing, digestion (I often think while taking a dump [as I'm doing right now], but I'm perfectly capable of thinking outside the toilet), muscle function, etc., are obviously irrelevant.

    As for the more interesting question about the biological processes happening in the brain, you'd be better off talking to a neurologist, I don't have anything beyond lay knowledge here. Still, I'd be very surprised if the detailed chemistry of the brain were relevant, as opposed to its abstract description in terms of information processing.


    IMO that's an overly flexible definition of the singularity, one which can be effectively summed up as "the end of life as we know it now". I propose that if you use that definition you can take any two reasonably spaced points on the timeline of human existence and term the transition "a singularity"; that would include, but not be limited to, the Ice Ages and the colonisation of America and Australia, to name but two.

    It's also worth observing that your type of singularities are very much dependent on the viewpoint of the observer...


    Unless you can reverse entropy, immortality is not possible in this universe. Even if you could it would be dubious. And I'm not even sure it's desirable. Ask me in a couple of thousand years.

    FWIW, I think that the mathematical concept of infinity is a mistake. It's something that mathematicians picked up because it's hard to think about the edges of things...they're never sharp. You could think of this as a corollary to Heisenberg's uncertainty principle. I suspect, however, that we're a sufficiently long way from the universe's edges that we can't see the fuzziness. My guess is that the universe becomes discontinuous at around 10^-33 cm. Black holes certainly don't have sharp edges. Etc. So infinity is a useful crutch to calculate with. The mistake comes when you think of it as representing something actual rather than a good approximation.


    Actually, while human mental processes are poor for some purposes, for others they are quite good. Object recognition in the face of adversarial imagery, e.g.

    That said, it's not clear that a "superhuman AI" would be designed to do math well. That would probably be a mistake in the design. (Handle that through library calls to a separate non-intelligent system.)

    What a good AI design would have as its advantage is the ability to replicate identical copies and segment the tasks onto different machines...oh, yes, and good communication with its twins. It would scale its breadth of intelligence so that each shard could have a deeper intelligence. (The final evaluation function might not be as good as that of a human, but it would be several plies further into the evaluation.) It's not clear to me how this analysis translates into a "deep learning" system, but I'm rather certain that it does. And I'm not certain that "deep learning" is the final model. (I tend to think it will be a blend of "deep learning" and an improved "evolutionary computing" model.)

    But do consider how easily the non-intelligent sub-tasks could be handed off to a library routine. And most tasks that don't involve search or compression are basically non-intelligent.


    Computer AI has already shown intuition in limited domains. In well-defined domains computer AIs have shown intelligence superior to humans'. AlphaGo is a series of examples of that.

    So intuition is not something that distinguishes humans from AIs. It is interesting that no-limit poker was a lot harder for AIs to handle than poker with limits. They had to come up with new ways to handle things. But they did.

    General cases are inherently more difficult to handle than more specialized cases, but that's not evidence that they can't be solved. Of course, sometimes they can't, but in all cases that can be proven to be something that a computer can't solve, people can't solve it either...though they often use good heuristics to come to "good enough" solutions. And there's no reason computers couldn't use the same approach.


    My gut feeling agrees with you on this one. There is a lot of circumstantial evidence to suggest that the individual features are more important than the whole in a lot of biological processes, and whilst I'm teasing a little, I'd point to the works of Peter Watts covering human vision and consciousness as proof enough for a SciFi blog.

    Less tongue-in-cheek and more in my sphere of learning: there is huge evidence that we don't see what we think we see, and we see similar optimisations appearing spontaneously in machine vision models. Given that vision is so intimately tied to our processing of that vision, it doesn't seem a huge leap to assume something analogous happens with the simulation of that processing.

    That doesn't of course preclude the devil being in the detail; just because a simulation of a person is dependent on selected details, it doesn't follow that those details are easily identifiable. That leads to two alternatives to whole-biological-system simulation: the "point details" approach or the "low gross resolution" approach. I.e. simulate a small amount well, or a large but finite amount just well enough.


    For most purposes, all you need is the very basic elements of set theory. I actually have some strong disagreements with most modern set theory where it involves things like infinite sets, but the basics are quite simple. Sets hold uniquely identified objects. If you take the union of two sets, things which are in both only exist once in the union, but everything that was in either is in the result. Etc.

    If you know maps in Java, or hash tables, or even red-black trees, a set is like the unique keys of one of those, with no values attached. They're the basis out of which, among other things, SQL was created. If you know databases or key-value stores, a set is one with the requirement of a unique key.

    (OK, I said I didn't accept infinite sets. There are also sets whose membership is algorithmically defined. I'm quite dubious about them, as some of them tend to lead to conclusions that I find unreasonable. But if you don't accept them, you also can't accept the irrational numbers...which I tend to think of as convenient computational tools that are "close enough" to accurate for most purposes, but which should never be thought of as actual. Like pi. 2^64 digits should be enough for anyone.)
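    To make that concrete, a few lines of Python (my illustration, not part of the original comment) show the behaviour described above:

```python
# Sets hold uniquely identified objects: duplicates collapse to one.
a = {1, 2, 3}
b = {3, 4}

# Union: everything that was in either set, each appearing only once.
assert a | b == {1, 2, 3, 4}

# Like the keys of a map/hash table: re-adding an existing element
# changes nothing.
a.add(2)
assert a == {1, 2, 3}

# Intersection and difference work the same way.
assert a & b == {3}
assert a - b == {1, 2}
```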


    Well, my definition is only a slight extension of Vernor Vinge's, and he wrote the paper that it's all based on. Of course he called it The Technological Singularity, and I don't limit it to that as I'd include a giant meteor impact, but most of the examples I gave are actually the result of humans using tech to totally change the way things work. That certainly applies to both thermo-nuclear war and 75% lethal biological war. It wouldn't apply to a natural pandemic unless that was caused by, say, densely raising pigs and chickens close to each other.

    I will grant that a lot of people have focused heavily on a small percentage of the things he considered. The ones that they find hopeful, without too much regard to probability, but that's just the way people normally work. You pick the future that you want and try to reach it.


    I'm just guessing that some biological processes are irrelevant, based on general reductionist grounds. Things like hair growing, digestion (I often think while taking a dump [as I'm doing right now], but I'm perfectly capable of thinking outside the toilet), muscle function, etc., are obviously irrelevant.

    Not necessarily.

    Researchers have identified gut microbiota that interact with brain regions associated with mood and behavior.

    The brain has a direct effect on the stomach. For example, the very thought of eating can release the stomach's juices before food gets there. This connection goes both ways. A troubled intestine can send signals to the brain, just as a troubled brain can send signals to the gut. Therefore, a person's stomach or intestinal distress can be the cause or the product of anxiety, stress, or depression. That's because the brain and the gastrointestinal (GI) system are intimately connected.

    This avenue of research has been around since the early 1900s, when doctors and scientists wrote a lot about how the contents of the colon—and harmful bacteria living there—could contribute to fatigue, depression and neuroses…

    These early forays into understanding the influence of gut bacteria on the brain were eventually dismissed as pseudoscience. However, in the past 10 years scientists have begun to reexamine the link between the gut and brain. As more studies are done, researchers are discovering that communication between the gut and brain is actually a two-way street. The brain influences both the immune and gastrointestinal functions, which can alter the makeup of the gut’s microbiome. In turn, bacteria in the gut produce neuroactive compounds—neurotransmitters and other metabolites that can act on the brain. Some of these have also been found to influence the permeability of the blood-brain barrier in mice, which keeps harmful substances in the blood from entering the brain.

    There are similar links between things like muscle movements and moods. Simulating an emotional expression (e.g. smiling or frowning) makes you more susceptible to that emotion.

    Or look at the effects of hormones on behaviour. Testosterone levels and risky behaviour are correlated. Or things like parasitic infections (such as toxoplasma gondii)…

    The mind and body seem (to me, anyway) to be intimately entangled.


    A question that enthusiasts rarely bother to answer is:

    Assuming that mind uploading is possible, what do you have to offer that makes it worth people's while to expend the resources to keep you running?

    I remember reading one short story that had a subscriber to a bankrupt AI heaven spending the rest of his existence classifying other people's spam. That seemed optimistic to me.


    I was taking the city bus to school in grade 5 too. Two buses with a transfer (very indirect route).

    The difference between Primary (until 11/12 years old) and Secondary (12/13 onward) at our Ministry of Defence-run boarding school, was that Primary children had to be picked up from the school at the end of term.

    Secondary children were handed tickets and passport, taken to Dunblane railway station, and left to get on with it. In many cases (rather a lot of parents were based in Germany as part of BAOR), this involved train to a city, bus to the airport, and multiple flights; central Scotland to (say) Hanover Airport. We just got on with it, and saw it as entirely normal (although granted, there would normally be a few others heading in the same direction).

    I did once get mugged, at age 14 or so, in Green Park tube station - travelling from Inverness to Dusseldorf, solo - but that's a different story.

    Nowadays it wouldn't be allowed — enforced through anonymous complaints to the authorities.

    One of our neighbours made a confidential complaint to the local Council about another neighbour's chickens in their back garden. The Council, regretfully, enforced a "no livestock" rule "on the basis of an anonymous complaint about the noise of the animals", and the chickens had to be removed. The fact that our houses back directly onto mature woodland, inhabited by pheasants, deer, foxes, and owls, was seen as irrelevant.

    The Council then wrote to the complainant - except the letter was accidentally delivered to, and opened by, the household that had just lost their chickens... Dear Mrs. X, regarding your complaint about the noise of the chickens from number 19, we hope that....

    Of course, no-one told Mrs.X... Amusingly, the delightfully two-faced Mrs.X also tried to cover up by suggesting to a different neighbour, who like everyone else had heard the tale of the letter, "that it was obviously that pair (my wife and I) who had made the complaint"... they didn't tell us until after she'd moved away, thankfully ;)


    Greg Egan explores this question at length in Permutation City.

    One thing you could do is write science fiction novels ;p


    Your brain controls your gut; nothing new here. When your gut malfunctions you're unhappy; when you have enough food in it, you're happy. Nothing new here either.

    As for the brain ordering the mouth muscles to smile making you more susceptible to happiness: there's no evidence that anything is happening outside the brain. In any case, Stephen Hawking is almost totally paralysed, but I still count him as intelligent.

    As for the hormones: they are just dumb signals sent into the brain. One can quite literally simulate their effect by injecting someone with the compound (as opposed to letting it be naturally produced by the body).

    In general, there is no information processing happening outside the brain. All these systems you mention can be simulated by simple sets of inputs/outputs. Furthermore, we have good evidence that the brain can work without them, from people who do not produce hormones, or have muscle paralysis, or have had their stomach/intestine removed.

    Of course people are not happy about having their body chopped up, but they are still clearly conscious and intelligent.


    I don't think such a general definition of Singularity is very interesting, especially if it is so broad that it is guaranteed to happen. I find more interesting the narrow question about whether exponentially self-improving AI is possible.

    (Maybe I should add that eternal exponentially self-improving AI is obviously impossible, as it would quickly hit fundamental physical bounds and taper down to a sigmoidal growth. But sigmoidally self-improving AI doesn't quite roll off the tongue ;)


    As for the hormones: they are just dumb signals sent into the brain. ... In general, there is no information processing happening outside the brain.

    My understanding is that that opinion is in contradiction to current state of play, and looks increasingly likely to be either the result of a definitional decision or a misunderstanding. Viz, if you define information processing as what the brain does then your ideas are tautologically correct.

    Once you start digging into the "how does external information get into the body" detail work, it all gets much more exciting. At the trivial end, "how many senses do people have" has gone from four or five to between 10 and 20, depending on who's counting.

    We can also play "what is human", asking whether mitochondria are part of us or separate species (I suspect most people would say part of us), but from there there's a whole range on up to the mites that live in our eyebrows, some of which we can live without and some we can't. As antibiotic users are increasingly discovering, "can live without" is a spectrum, and at the bottom end quality of life is pretty poor (you can also live without a kidney, 3/4 of your liver, all your limbs, a big chunk of your spinal cord...).

    The point is that some of those species most definitely process information. If they're part of us... information processing is happening outside our brains. If they're not part of us, they're just symbiotes we can't live without, there's still information processing it's just not done by us.


    A question that enthusiasts rarely bother to answer is: Assuming that mind uploading is possible, what do you have to offer that makes it worth people's while to expend the resources to keep you running?

    There's a trivial solution to the question of how many resources it takes to run a human mind: about 500 watts, and a CPU that consists of about 1.5-2 kilograms of impure oil/water emulsion, aka a human brain. The 500 watts includes running the peripherals (i.e. your body) on an ongoing basis.

    The original human brain that we're emulating may be the most compact possible platform for a human mind to run on; but it also does a bunch of other things that we might be able to get rid of via deduplication. For example, every human neuron contains its own nucleus and DNA replication framework; if it's not directly implicated in the neural functioning we're interested in (i.e. producing and propagating an action potential and trans-synaptic signal processing) then we might not need to replicate it on the order of 10^11 times in our emulation.

    So my guess is that a mature mind uploading technology might require well under half a kilowatt per mind, which is supplied by about two square meters of photovoltaic panels in Earth orbit (above atmospheric transmission loss).

    Upshot: a mature mind uploading technology should in principle be extremely cheap to run, in today's energy-economic terms.
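    As a back-of-the-envelope check on that claim (the solar constant and panel efficiency figures below are my assumptions, not from the comment above):

```python
# How much orbital photovoltaic panel does one 500 W mind need?
SOLAR_CONSTANT = 1361.0   # W/m^2 above the atmosphere (assumed value)
PANEL_EFFICIENCY = 0.20   # assumed photovoltaic conversion efficiency

power_per_mind = 500.0    # W, the figure quoted above

panel_area = power_per_mind / (SOLAR_CONSTANT * PANEL_EFFICIENCY)
print(f"{panel_area:.2f} m^2 per mind")  # prints "1.84 m^2 per mind"
```

    That's consistent with the "about two square meters" figure quoted above.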


    I think you're onto something here.

    Someone who has invested untold resources of effort, time, and belief into a theory that human thinking can be mapped onto pure number crunching is going to have trouble coping with ideas that are contrary to this notion.

    For this person, the effects of something like Toxoplasma gondii, or any other organism that causes measurable changes in thought processes (up to and including the rabies virus, I fear), might be dismissed out of hand as irrelevant, even if they themselves test positive for the parasite. They'll probably define psychoactive drugs, from caffeine to LSD, as irrelevant as well.

    The even more fundamental problem with any ideologue who espouses purely mathematical models of reality is that there's a subtle but pervasive confirmation bias at play. Math is about stuff that can be quantified. Someone who's biased towards reality being mathematical will point to the successes of math at explaining phenomena and claim that as evidence that they're correct. A skeptic of the theory will point to all the hard-to-quantify blobs and smears in the universe, at all scales, as evidence that the math-heads only notice the tractable problems, and thereby fool themselves into believing that math is all.

    This is all very abstract, so here's an example: fungal population genetics. Population genetics in general is about how variations of alleles manifest in a population, and it has all sorts of neat statistics. Doing population genetics on fungi is hard because, among other things, it's hard to determine what constitutes an individual fungus. For most multicellular fungi, if you chop an individual in half, you have two fungi, and if you let these two ramets sit next to each other, they might fuse back into a single individual. Indeed, a good chunk of the phylum Basidiomycota reproduces by fusing individuals, and it's normal in many basidiomycete species to find individuals with one genome (n) as well as with two genomes (n+n). Nuclear fusion (where n+n becomes 2n) only occurs in these fungi in cells that produce haploid (n) spores.

    What counts as an individual to a fungus, either genetically or physically? It's something that's easy to experience, hard to count, and central to life on this planet. Personally, I started seriously thinking about Buddhism's whole point of the fallacy of ego existing while I was trying to understand fungi, and that's how we get back to the idea of being able to upload a human into a computer. Humans are so blobular and boundary-ignoring that I'm not sure we can even do that effectively.


    I just had thoughts about the combination of deduplication and running multiple human simulations. I HOPE YOU ARE HAPPY.


    I started seriously thinking about Buddhism's whole point of the fallacy of ego existing while I was trying to understand fungi,

    {grin} I just read New Scientist and try to nod knowingly at the right moments. I'm more challenged by "where does the brain end" and "what mechanism produces consciousness" way back in time. I mean, I have been exploiting bugs in dog brains since way back, but I'm also aware that (most modern) dogs only exist due to bugs in human brains[1].

    For me there was a crisis moment when my increasingly desperate attempts to pretend that mind-body dualism was possible went to shit both literally and figuratively. I was given some fairly nasty antibiotics which did things to my microbiome. After feeling out of sorts to the point of suicide I "discovered" a link between fixing my yuppie first-world non-problem of "non-celiac gluten sensitivity" and mental health. Diet changes, then eating various disgusting cultures not only made various gut upsets drop dramatically in frequency, I felt better too. Causation seemed very much to run from diet to brain, albeit via "brain says: change diet", arguably preserving the mind as paramount.

    Note that there's a difference between "I'm sick and that makes me feel unhappy about being sick" and "I have a minor physical problem and also clinical depression". Please argue about which applied to me somewhere else if you have to argue it at all.

    [1] viz, working dogs are not a bug, because laziness is a valuable trait. But cuteness is an exploit - dogs are not human babies.


    "...mature mind uploading technology might require well under half a kilowatt per mind"

    I had also considered dedupe for genuinely uniform entities. Something like an ontology and a collection of classes, though I am not seeing clearly whether or not instantiated entities need to be mutable anyway. Optimised, that might not matter if mutability is not often required (a copy-on-write strategy would work, I suppose).
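    A minimal copy-on-write sketch in Python (the class names and structure are mine, purely illustrative of the strategy):

```python
class SharedState:
    """One deduplicated copy of data common to many instances."""
    def __init__(self, data):
        self.data = dict(data)

class Entity:
    """An instance that shares state until it first mutates it."""
    def __init__(self, shared):
        self._shared = shared
        self._local = None  # private copy, created on first write

    def get(self, key):
        store = self._local if self._local is not None else self._shared.data
        return store[key]

    def set(self, key, value):
        # Copy-on-write: clone the shared data only when a write occurs.
        if self._local is None:
            self._local = dict(self._shared.data)
        self._local[key] = value

base = SharedState({"neurons": 10**11, "mood": "calm"})
a, b = Entity(base), Entity(base)
a.set("mood", "curious")        # a gets a private copy...
assert b.get("mood") == "calm"  # ...b still reads the shared one
```

    Each instance reads the shared data for free and only pays for a private copy at its first write, which is exactly the saving dedupe is after when mutation is rare.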

    I'd suggest that a larger saving on energy comes from the fact that while the system might need to emulate energetic processes like muscle contraction, respiration and digestion, the actual energy expenditures for these processes would not be required. Conversely, emulation is more compute intensive than running something natively, though I agree mature emulators benefit heavily from optimisations. We just don't know how complex the required emulation might be at this point.

    There is a query and potential problem that occurs to me, one that dedupe touches on. In a way it depends on how deep we need the model to go, as I asked above whether emulation at the molecular level is deep enough. I get that where uncertainty comes into play, a lot can be modeled with heuristics or truth tables, or even rules of thumb (which I suspect is why our theoretical physicist friend is so confident that everything can be done from within a classical computing paradigm). But this doesn't need to involve uncertainty... treating hormonal responses as "external" and reducing them to inputs/outputs still implies some kind of algorithmic gaming around how the "external" system determines these inputs. Likewise gut bacteria and other internal symbionts.

    The query is in terms of what mischief could be wrought on the subjective experience of the emulated person through gaming the heuristics or simplification rules in place for any sub-entity that is simplified. I'm not suggesting that modeling these additional entities separately isn't achievable, but the question is more around how subjective the resulting experience remains, versus how much it might be biased towards certain responses in a way that doesn't occur on the squishware. Whether that means preserving free will is a matter of jealously guarding the integrity of your random number generator. For example, the statement "the brain controls the gut" is somewhat naive: if anything it's the other way around; certain inputs will predispose the subjective entity to certain viewpoints and decisions.

    Obviously we're subject to this stuff as we are now, but the difference is that in the emulated version someone gets to make a conscious decision about how these things would work. There is a definite cession of some control over subjective experience (no matter how a process of uploading might actually work) to external parties. To the level of whether running, say, on Dell hardware may, just by coincidence, lead to your views aligning with the interests of the State of Texas.


    Look, if you want to be simulated together with the mites in your eyebrows, have fun. I, for one, am more interested in the thoughts from my own brain, and am happy to not need to clip my toenails.