
Artificial Intelligence: Threat or Menace?

(This is the text of a keynote talk I delivered today at the IT Futures conference held by the University of Edinburgh Informatics centre.)

Good morning. I'm Charlie Stross, and I tell lies for money. That is, I write fiction—deliberate non-truths designed to inform, amuse, and examine the human condition. More specifically, I'm a science fiction writer, mostly focusing on the intersection between the human condition and our technological and scientific environment: less Star Wars, more about bank heists inside massively multiplayer computer games, or the happy fun prospects for 3D printer malware.

One of the besetting problems of near-future science fiction is that life comes at you really fast these days. Back when I agreed to give this talk, I had no idea we'd be facing a general election campaign — much less that the outcome would already be known, with consequences that pretty comprehensively upset any predictions I was making back in September.

So, because I'm chicken, I'm going to ignore current events and instead take this opportunity to remind you that I can't predict the future. No science fiction writer can. Predicting the future isn't what science fiction is about. As the late Edsger Dijkstra observed, "computer science is no more about computers than astronomy is about telescopes." He might well have added that science fiction isn't about predicting the future, either. What I try to do is examine the human implications of possible developments, and imagine what consequences they might have. (Hopefully entertainingly enough to convince the general public to buy my books.)

So: first, let me tell you some of my baseline assumptions so that you can point and mock when you re-read the transcript of this talk in a decade's time.

Ten years in the future, we will be living in a world recognizable as having emerged from the current one by a process of continuous change. About 85% of everyone alive in 2029 is already alive in 2019. (Similarly, most of the people who're alive now will still be alive a decade hence, barring disasters on a historic scale.)

Here in the UK the average home is 75 years old, so we can reasonably expect most of the urban landscape of 2029 to be familiar. I moved to Edinburgh in 1995: while the Informatics Forum is new (a side-effect of the disastrous 2002 old town fire), many of the university premises are historic. Similarly, half the cars on the road today will still be on the roads in 2029, although I expect most of the diesel fleet will have been retired due to exhaust emissions, and there will be far more electric vehicles around.

You don't need a science fiction writer to tell you this stuff: 90% of the world of tomorrow plus ten years is obvious to anyone with a weekly subscription to New Scientist and more imagination than a doorknob.

What's less obvious is the 10% of the future that isn't here yet. Of that 10%, you used to be able to guess most of it — 9% of the total — by reading technology road maps in specialist industry publications. We know what airliners Boeing and Airbus are starting development work on, we can plot the long-term price curve for photovoltaic panels, read the road maps Intel and ARM provide for hardware vendors, and so on. It was fairly obvious in 2009 that Microsoft would still be pushing some version of Windows as a platform for their hugely lucrative business apps, and that Apple would have some version of NeXTStep — excuse me, macOS — as a key element of their vertically integrated hardware business. You could run the same guessing game for medicines by looking at clinical trials reports, and seeing which drugs were entering second-stage trials — an essential but hugely expensive prerequisite for a product license, which requires a manufacturer to be committed to getting the drug on the market by any means possible (unless there's a last-minute show-stopper), 5-10 years down the line.

Obsolescence is also largely predictable. The long-drawn-out death of the pocket camera was clearly visible on the horizon back in 2009, as cameras in smartphones were becoming ubiquitous: ditto the death of the pocket GPS system, the compass, the camcorder, the PDA, the mp3 player, the ebook reader, the pocket games console, and the pager. Smartphones are technological cannibals, swallowing up every available portable electronic device that can be crammed inside their form factor.

However, this stuff ignores what Donald Rumsfeld named "the unknown unknowns". About 1% of the world of ten years hence always seems to have sprung fully-formed from the who-ordered-THAT dimension: we always get landed with stuff nobody foresaw or could possibly have anticipated, unless they were spectacularly lucky guessers or had access to amazing hallucinogens. And this 1% fraction of unknown unknowns regularly derails near-future predictions.

In the 1950s and 1960s, futurologists were obsessed with resource depletion, the population bubble, and famine: Paul Ehrlich and the other heirs of Thomas Malthus predicted wide-scale starvation by the mid-1970s as the human population bloated past the unthinkable four billion mark. They were wrong, as it turned out, because of the unnoticed work of a quiet agronomist, Norman Borlaug, who was pioneering new high yield crop strains: what became known as the Green Revolution more than doubled global agricultural yields within the span of a couple of decades. Meanwhile, it turned out that the most effective throttle on population growth was female education and emancipation: the rate of growth has slowed drastically and even reversed in some countries, and WHO estimates of peak population have been falling continuously as long as I can remember. So the take-away I'd like you to keep is that the 1% of unknown unknowns are often the most significant influences on long-term change.

If I was going to take a stab at identifying a potential 1% factor, the unknown unknowns that dominate the second and third decades of the 21st century, I wouldn't point to climate change — the dismal figures are already quite clear — but to the rise of algorithmically targeted advertising campaigns combined with the ascendancy of social networking. Our news media, driven by the need to maximize advertising click-throughs for revenue, have been locked in a race to the bottom for years now. In the past half-decade this has been weaponized, in conjunction with data mining of the piles of personal information social networks try to get us to disclose (in pursuit of advertising bucks), to deliver toxic propaganda straight into the eyeballs of the most vulnerable — with consequences that threaten to undermine the legitimacy of democratic governance on a global scale.

Today's internet ads are qualitatively different from the direct mail campaigns of yore. In the age of paper, direct mail came with a steep price of entry, which effectively limited it in scope — also, the print distribution chain was relatively easy to police. The efflorescence of spam from 1992 onwards should have warned us that junk information drives out good, but the spam kings of the 1990s were just the harbingers of today's information apocalypse. The cost of pumping out misinformation is frighteningly close to zero, and bad information drives out good: if the propaganda is outrageous and exciting it goes viral and spreads itself for free.

The recommendation algorithms used by YouTube, Facebook, and Twitter exploit this effect to maximize audience engagement, and with it advertising click-throughs. They promote popular related content, thereby prioritizing controversial and superficially plausible narratives. Viewer engagement is used to iteratively fine-tune the selection of content so that it is more appealing, but this tends to trap us in filter bubbles of material that reinforces our existing beliefs. And bad actors have learned to game these systems to promote dubious content. It's not just Cambridge Analytica I'm talking about here, or allegations of Russian state meddling in the 2016 US presidential election. Consider the spread of anti-vaccination talking points and wild conspiracy theories, which are no longer fringe phenomena but mass movements with enough media traction to generate public health emergencies in Samoa and drive-by shootings in Washington DC. Or the algorithmically generated knock-offs of children's TV shows proliferating on YouTube that caught the public eye last year.

... And then there's the cute cat photo thing. If I could take a time machine back to 1989 and tell an audience like yourselves that in 30 years time we'd all have pocket supercomputers that place all of human knowledge at our fingertips, but we'd mostly use them for looking at kitten videos and nattering about why vaccination is bad for your health, you'd have me sectioned under the Mental Health Act. And you'd be acting reasonably by the standards of the day: because unlike fiction, emergent human culture is under no obligation to make sense.

Let's get back to the 90/9/1 percent distribution that applies to the components of the near future: 90% here today, 9% not here yet but on the drawing boards, and 1% unpredictable. I came up with that rule of thumb around 2005, but the ratio seems to be shifting these days. Changes happen faster, and there are more disruptive unknown-unknowns hitting us from all quarters with every passing decade. This is a long-established trend: throughout most of recorded history, the average person lived their life pretty much the same way as their parents and grandparents. Long-term economic growth averaged less than 0.1% per year over the past two thousand years. It has only been since the onset of the industrial revolution that change has become a dominant influence on human society. I suspect the 90/9/1 distribution is now something more like 85/10/5 — that is, 85% of the world of 2029 is here today, about 10% can be anticipated, and the random, unwelcome surprises constitute up to 5% of the mix. Which is kind of alarming, when you pause to think about it.

In the natural world, we're experiencing extreme weather events caused by anthropogenic climate change at an increasing frequency. Back in 1989, or 2009, climate change was a predictable thing that mostly lay in the future: today in 2019, or tomorrow in 2029, random-seeming extreme events (the short-term consequences of long-term climatic change) are becoming commonplace. Once-a-millennium weather outrages are already happening once a decade: by 2029 it's going to be much, much worse, and we can expect the onset of destabilization of global agriculture, resulting in seemingly random food shortages as one region or another succumbs to drought, famine, or wildfire.

In the human cultural sphere, the internet is pushing fifty years old, and not only have we become used to it as a communications medium, we've learned how to second-guess and game it. 2.5 billion people are on Facebook, and the internet reaches almost half the global population. I'm a man of certain political convictions, and I'm trying very hard to remain impartial here, but we have just come through a spectacularly dirty election campaign in which home-grown disinformation (never mind propaganda by external state-level actors) has made it almost impossible to get trustworthy information about topics relating to party policies. One party renamed its Twitter-verified feed from its own name to FactCheckUK for the duration of a televised debate. Again, we've seen search engine optimization techniques deployed successfully by a party leader — let's call him Alexander de Pfeffel something-or-other — who talked at length during a TV interview about his pastime of making cardboard model coaches. This led Google and other search engines to downrank a certain referendum bus with a promise about saving £350M a week for the NHS painted on its side, a promise which by this time had become deeply embarrassing.

This sort of tactic is viable in the short term, but in the long term is incredibly corrosive to public trust in the media — in all media.

Nor are the upheavals confined to the internet.

Over the past two decades we've seen revolutions in stock market and forex trading. At first it was just competition for rackspace as close as possible to the stock exchange switches, to minimize packet latency — we're seeing the same thing playing out on a smaller scale among committed gamers, picking and choosing ISPs for the lowest latency — then came the high frequency trading arms race, in which fuzzing the market by injecting "noise" in the shape of tiny but frequent trades allowed volume traders to pick up an edge (and effectively made small-scale day traders obsolete). I lack inside information but I'm pretty sure if you did a deep dive into what's going on behind the trading desks at FTSE and NASDAQ today you'd find a lot of powerful GPU clusters running Generative Adversarial Networks to manage trades in billions of pounds' worth of assets. Lights out, nobody home, just the products of the post-2012 boom in deep learning hard at work, earning money on behalf of the old, slow, procedural AIs we call corporations.

What do I mean by that — calling corporations AIs?

Although speculation about mechanical minds goes back a lot further, the field of Artificial Intelligence was largely popularized and publicized by the groundbreaking 1956 Dartmouth Conference organized by Marvin Minsky, John McCarthy, Claude Shannon, and Nathaniel Rochester of IBM. The proposal for the conference asserted that "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it", a proposition that I think many of us here would agree with, or at least be willing to debate. (Alan Turing sends his apologies.) Furthermore, I believe mechanisms exhibiting many of the features of human intelligence had already existed for some centuries by 1956, in the shape of corporations and other bureaucracies. A bureaucracy is a framework for automating decision processes that a human being might otherwise carry out, using human bodies (and brains) as components: a corporation adds goal-seeking constraints and real-world I/O to the procedural rules-based element.

As justification for this outrageous assertion — that corporations are AIs — I'd like to steal philosopher John Searle's "Chinese Room" thought experiment and misapply it creatively. Searle, a skeptic about the post-Dartmouth Hard AI project — the proposition that symbolic computation could be used to build a mind — suggested the thought experiment as a way to discredit the idea that a digital computer executing a program can be said to have a mind. But I think he inadvertently demonstrated something quite different.

To crib shamelessly from wikipedia:

Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has constructed a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer comfortably passes the Turing test, by convincing a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.

The question Searle asks is: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese?

Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient papers, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows that he would do so as well, simply by running the program manually.

Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing a behavior which is then interpreted by the user as demonstrating intelligent conversation. But Searle himself would not be able to understand the conversation.

The problem with this argument is that it is apparent that a company is nothing but a very big Chinese Room, containing a large number of John Searles, all working away at their rule sets and inputs. We may not agree that an AI "understands" Chinese, but we can agree that it performs symbolic manipulation; and a room full of bureaucrats looks awfully similar to a hypothetical Turing-test-passing procedural AI from here.

Companies don't literally try to pass the Turing test, but they exchange information with other companies — and they are powerful enough to process inputs far beyond the capacity of an individual human brain. A Boeing 787 airliner contains on the order of six million parts and is produced by a consortium of suppliers (coordinated by Boeing); designing it is several orders of magnitude beyond the competence of any individual engineer, but the Boeing "Chinese Room" nevertheless developed a process for designing, testing, manufacturing, and maintaining such a machine, and it's a process that is not reliant on any sole human being.

Where, then, is Boeing's mind?

I don't think Boeing has a mind as such, but it functions as an ad-hoc rules-based AI system, and exhibits drives that mirror those of an actual life form. Corporations grow, predate on one another, seek out sources of nutrition (revenue streams), and invade new environmental niches. Corporations exhibit metabolism, in the broadest sense of the word — they take in inputs and modify them, then produce outputs, including a surplus of money that pays for more inputs. Like all life forms they exist to copy information into the future. They treat human beings as interchangeable components, like cells in a body: they function as superorganisms — hive entities — and they reap efficiency benefits when they replace fallible and fragile human components with automated replacements.

Until relatively recently the automation of corporate functions was limited to mid-level bookkeeping operations — replacing ledgers with spreadsheets and databases — but we're now seeing the spread of robotic systems outside manufacturing to areas such as lights-out warehousing, and the first deployments of deep learning systems for decision support.

I spoke about this at length a couple of years ago in a talk I delivered at the Chaos Communications Congress in Leipzig, titled "Dude, You Broke the Future" — you can find it on YouTube and a text transcript on my blog — so I'm not going to dive back into that topic today. Instead I'm going to talk about some implications of the post-2012 AI boom that weren't obvious to me two years ago.

Corporations aren't the only pre-electronic artificial intelligences we've developed. Any bureaucracy is a rules-based information processing system. Governments are superorganisms that behave like very large corporations, but differ insofar as they can raise taxes (thereby creating demand for circulating money, which they issue), stimulating economic activity. They can recirculate their revenue through constructive channels such as infrastructure maintenance, or destructive ones such as military adventurism. Like corporations, governments are potentially immortal until an external threat or internal decay damages them beyond repair. By promulgating and enforcing laws, governments provide an external environment within which the much smaller rules-based corporations can exist.

(I should note that at this level, it doesn't matter whether the government's claim to legitimacy is based on the will of the people, the divine right of kings, or the Flying Spaghetti Monster: I'm talking about the mechanical working of a civil service bureaucracy, what it does rather than why it does it.)

And of course this brings me to a third species of organism: academic institutions like the University of Edinburgh.

Viewed as a corporation, the University of Edinburgh is impressively large. With roughly 4000 academic staff, 5000 administrative staff, and 36,000 undergraduate and postgraduate students (who may be considered as a weird chimera of customers and freelance contractors), it has a budget of close to a billion pounds a year. Like other human superorganisms, Edinburgh University exists to copy itself into the future — the climactic product of a university education is, of course, a professor (or alternatively a senior administrator), and if you assemble a critical mass of lecturers and administrators in one place and give them a budget and incentives to seek out research funding and students, you end up with an academic institution.

Quantity, as the military say, has a quality all of its own. Just as the Boeing Corporation can undertake engineering tasks that dwarf anything a solitary human can expect to achieve within their lifetime, so too can an institution out-strip the educational or research capabilities of a lone academic. That's why we have universities: they exist to provide a basis for collaboration, quality control, and information exchange. In an idealized model university, peers review one another's research results and allocate resources to future investigations, meanwhile training undergraduate students and guiding postgraduates, some of whom will become the next generation of researchers and teachers. (In reality, like a swan gliding serenely across the surface of a pond, there's a lot of thrashing around going on beneath the surface.)

The corpus of knowledge that a student needs to assimilate to reach the coal face of their chosen field exceeds the competence of any single educator, so we have division of labour and specialization among the teachers: and the same goes for the practice of research (and, dare I say it, writing proposals and grant applications).

Is the University of Edinburgh itself an artificial intelligence, then?

I'm going to go out on a limb here and say "not yet". While the University Court is a body corporate established by statute, and the administration of any billion pound organization of necessity shares traits with the other rules-based bureaucracies, we can't reasonably ascribe a theory of mind, or actual self-aware consciousness, to a university. Indeed, we can't ascribe consciousness to any of the organizations and processes around us that we call AI.

Artificial Intelligence really has come to mean three different things these days, although they all fall under the heading of "decision making systems opaque to human introspection". We have the classical bureaucracy, with its division of labour and procedures executed by flawed, fallible human components. Next, we have the rules-based automation of the 1950s through 1990s, from Expert Systems to Business Process Automation systems — tools which improve the efficiency and reliability of the previous bureaucratic model and enable it to operate with fewer human cogs in the gearbox. And since roughly 2012 we've had a huge boom in neural computing, which I guess is what brings us here today.

Neural networks aren't new: they started out as an attempt in the early 1950s to model the early understanding of how animal neurons work. The high level view of nerves back then — before we learned a lot of confusing stuff about pre- and post-synaptic receptor sites, receptor subtypes, microtubules, and so on — is that they're wiring and switches, with some basic additive and subtractive logic superimposed. (I'm going to try not to get sidetracked into biology here.) Early attempts at building recognizers using neural network circuitry, such as 1957's Perceptron network, showed initial promise. But they were sidelined after 1969, when Minsky and Papert formally proved that a single-layer perceptron was computationally weak: it can't even compute an exclusive-OR function. As a result of this resounding vote of no-confidence, research into neural networks stagnated until the 1980s and the development of backpropagation. And even with a more promising basis for work, the field developed slowly thereafter, hampered by the then-available computers.
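The XOR limitation is easy to demonstrate for yourself. Here's a minimal sketch in Python with NumPy (my own illustration, using the classic perceptron learning rule): the same training loop that masters AND never manages XOR, because no single straight line can separate XOR's two classes.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Classic single-layer perceptron learning rule."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # Update weights only when the prediction is wrong.
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return [1 if xi @ w + b > 0 else 0 for xi in X]

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
AND = [0, 0, 0, 1]  # linearly separable: learnable
XOR = [0, 1, 1, 0]  # not linearly separable: unlearnable

print(train_perceptron(X, np.array(AND)))  # converges to [0, 0, 0, 1]
print(train_perceptron(X, np.array(XOR)))  # never matches [0, 1, 1, 0]
```

Backpropagation escapes this trap by stacking layers: a network with even one hidden layer can carve the plane into the regions XOR requires, which is why the field revived in the 1980s.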

A few years ago I compared the specifications for my phone — an iPhone 5, at that time — with a Cray X-MP supercomputer. By virtually every metric, the iPhone kicked sand in the face of its 30-year supercomputing predecessor, and today I could make the same comparison with my wireless headphones or my wrist watch. We tend to forget how inexorable the progress of Moore's Law has been over the past five decades. It has brought us roughly ten orders of magnitude of performance improvements in storage media and working memory, a mere nine or so orders of magnitude in processing speed, and a dismal seven orders of magnitude in networking speed.

In search of a concrete example, I looked up the performance figures for the GPU card in the newly-announced Mac Pro; it's a monster capable of up to 28.3 teraflops, with 1TB/sec memory bandwidth and up to 64GB of memory. This is roughly equivalent to the NEC Earth Simulator of 2002, a supercomputer cluster which filled 320 cabinets, consumed 6.4 MW of power, and cost the Japanese government 60 billion Yen (or about £250M) to build. The Radeon Pro Vega II Duo GPU I'm talking about is obviously much more specialized and doesn't come with the 700TB of disk or 1.6 petabytes of tape backup, but for raw numerical throughput — which is a key requirement in training a neural network — it's competitive. Which is to say: a 2020 workstation is roughly as powerful as half a billion pounds-worth of 2002 supercomputer when it comes to training deep learning applications.

In fact, the iPad I'm reading this talk from — a 2018 iPad Pro — has a processor chipset that includes a dedicated 8-core neural engine capable of processing 5 trillion 8-bit operations per second. So, roughly comparable to a mid-90s supercomputer.

Life (and Moore's Law) comes at you fast, doesn't it?

But the news on the software front is less positive. Today, our largest neural networks aspire to the number of neurons found in a mouse brain, but they're structurally far simpler. The largest we've actually trained to do something useful are closer in complexity to insects. And you don't have to look far to discover the dismal truth: we may be able to train adversarial networks to recognize human faces most of the time, but there are also famous failures.

For example, there's the Home Office passport facial recognition system deployed at airports. It was recently reported that it has difficulty recognizing faces with very pale or very dark skin tones, and sometimes mistakes larger than average lips for an open mouth. If the training data set is rubbish, the output is rubbish, and evidently the Home Office used a training set that was not sufficiently diverse. The old IT proverb applies, "garbage in, garbage out" — now with added opacity.

The key weakness of neural network applications is that they're only as good as the data set they're trained against. The training data is invariably curated by humans. And so, the deep learning application tends to replicate the human trainers' prejudices and misconceptions.

Let me give you some more cautionary tales. Amazon is a huge corporation, with roughly 750,000 employees. That's a huge human resources workload, so they sank time and resources into training a network to evaluate resumes from job applicants, in order to pre-screen them and spit out the top 5% for actual human interaction. Unfortunately the training data set consisted of resumes from existing engineering employees, and even more unfortunately a very common underlying quality of an Amazon engineering employee is that they tend to be white and male. Upshot: the neural network homed in on this and the project was ultimately cancelled because it suffered from baked-in prejudice.
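The mechanism behind the Amazon failure is easy to reproduce in miniature. Here's a hypothetical sketch (the keywords and hiring records below are invented for illustration) of a naive scorer that, trained on past decisions alone, faithfully learns the correlations in its training data, including the ones nobody wanted:

```python
# Hypothetical resumes reduced to keyword sets, labelled with past
# hiring decisions that skewed along gender lines. "netball" stands
# in for any feature that correlates with gender, not job performance.
history = [
    ({"python", "golf"}, 1),      # hired
    ({"java", "golf"}, 1),        # hired
    ({"python", "football"}, 1),  # hired
    ({"java", "netball"}, 0),     # rejected
    ({"python", "netball"}, 0),   # rejected
]

def keyword_weights(data):
    """Weight each keyword by how far its hire rate sits above or below
    the overall hire rate. Pure pattern matching: the scorer has no
    concept of which keywords are actually job-relevant."""
    base = sum(label for _, label in data) / len(data)
    weights = {}
    for k in set().union(*(kws for kws, _ in data)):
        outcomes = [label for kws, label in data if k in kws]
        weights[k] = sum(outcomes) / len(outcomes) - base
    return weights

w = keyword_weights(history)
score = lambda resume: sum(w.get(k, 0.0) for k in resume)

# Two candidates identical except for one job-irrelevant keyword:
print(score({"python", "golf"}))     # boosted
print(score({"python", "netball"}))  # penalised: the bias is baked in
```

Nothing in the code mentions gender; the prejudice arrives entirely through the training labels, which is exactly why it's so hard to spot from the outside.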

Google Translate provides another example. Turkish has a gender-neutral pronoun for the third-person singular that has no English-language equivalent. (The closest would be the third-person plural pronoun, "they".) Google Translate was trained on a large corpus of documents, but came down with a bad case of gender bias in 2017, when it was found to be turning the neutral pronoun into a "he" when in the same sentence as "doctor" or "hard working," and a "she" when it was in proximity to "lazy" and "nurse."

Possibly my favourite (although I drew a blank in looking for the source, so you should treat this as possibly apocryphal) was a DARPA-funded project to distinguish NATO main battle tanks from foreign tanks. It got excellent results using training data, but wasn't so good in the field ... because it turned out that the recognizer had gotten very good at telling the difference between snow and forest scenes and arms trade shows. (Russian tanks are frequently photographed in winter conditions — who could possibly have imagined that?)

Which brings me back to Edinburgh University.

I can speculate wildly about the short-term potential for deep learning in the research and administration areas. Research: it's a no-brainer to train a GAN to do the boring legwork of looking for needles in the haystacks of experimental data, whether it be generated by genome sequencers or radio telescopes. Technical support: just this last weekend I was talking to a bloke whose startup is aiming to use deep learning techniques to monitor server logs and draw sysadmin attention to anomalous patterns in them. Administration: if we can just get past the "white, male" training trap that tripped up Amazon, they could have a future in screening job candidates or student applications. Ditto, automating helpdesk tasks — the 80/20 rule applies, and chatbots backed by deep learning could be a very productive tool in sorting out common problems before they require human intervention. This stuff is obvious.

But it's glaringly clear that we need to get better — much better — at critiquing the criteria by which training data is compiled, and at working out how to sanity-test deep learning applications.

For example, consider a GAN trained to evaluate research grant proposals. It's almost inevitable that some smart-alec will think of this (and then attempt to use feedback from GANs to improve grant proposals, by converging on the set of characteristics that have proven most effective in extracting money from funding organizations in the past). But I'm almost certain that any such system would tend to recommend against ground-breaking research by default: promoting proposals that resemble past research is no way to break new ground.

Medical clinical trials focus disproportionately on male subjects, to such an extent that some medicines receive product licenses without being tested on women of childbearing age at all. If we use existing trials as training data for identifying possible future treatments we'll inevitably end up replicating historic biases, missing significant opportunities to deliver breakthrough healthcare to demographics who have been overlooked.

Or imagine the uses of GANs for screening examinations — either to home in on patterns indicative of understanding in essay questions (grading essays being a huge and tedious chore), or (more controversially) to identify cheating and plagiarism. The opacity of GANs means that it's possible that they will encode some unsuspected prejudices on the part of the examiners whose work they are being trained to reproduce. More troublingly, GANs are vulnerable to adversarial attacks: if the training set for a neural network is available, it's possible to identify inputs which will exploit features of the network to produce incorrect outputs. If a neural network is used to gatekeep some resource of interest to human beings, human beings will try to pick the lock, and the next generation of plagiarists will invest in software to produce false negatives when their essay mill purchases are screened.
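The lock-picking is straightforward once the model leaks. As an illustration (a toy linear gatekeeper with made-up weights, not a real GAN, though gradient-based attacks on deep networks follow the same logic), an attacker who knows the weights simply nudges every input feature in the direction the model rewards:

```python
import numpy as np

# Toy gatekeeper: a linear scorer whose weights have leaked.
# Weights and inputs are invented for illustration.
w = np.array([1.5, -2.0, 0.5])
b = -0.1

def accept(x):
    """Gatekeeping decision: admit if the score clears zero."""
    return bool(x @ w + b > 0)

x = np.array([0.2, 0.4, 0.1])
print(accept(x))        # False: honestly scored, rejected

# Knowing w, shift each feature by a small step in the direction of
# its weight's sign. This is the idea behind FGSM-style adversarial
# attacks: a tiny, targeted perturbation flips the decision.
eps = 0.5
x_adv = x + eps * np.sign(w)
print(accept(x_adv))    # True: same candidate, now waved through
```

Against a deep network the attacker follows the gradient instead of the raw weight signs, but the asymmetry is the same: the defender must be right everywhere, the attacker only needs one exploitable direction.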

And let's not even think about the possible applications of neurocomputing to ethics committees, not to mention other high-level tasks that soak up valuable faculty time. Sooner or later someone will try to use GANs to pre-screen proposed applications of GANs for problems of bias. Which might sound like a worthy project, but if the bias is already encoded in the ethics monitoring neural network, experiments will be allowed to go forward that really shouldn't, and vice versa.

Professor Noel Sharkey of Sheffield University went public yesterday with a plea for decision algorithms that impact people's lives — from making decisions on bail applications in the court system, to prefiltering job applications — to be subjected to large-scale trials before roll-out, to the same extent as pharmaceuticals (which have a similar potential to blight lives if they aren't carefully tested). He suggests that the goal should be to demonstrate that there is no statistically significant in-built bias before algorithms are deployed in roles that detrimentally affect human subjects: he's particularly concerned by military proposals to field killer drones without a human being in the decision control loop. I can't say that he's wrong, because he's very, very right.
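The kind of pre-deployment check Sharkey is asking for can start very simply. As a sketch (with invented counts, not real data): compare the algorithm's decision rates across two demographic groups and flag any statistically significant gap before roll-out, using a standard two-proportion z-test.

```python
import math

def two_proportion_z(approved_a, total_a, approved_b, total_b):
    """Z statistic for the difference between two approval rates."""
    p_a, p_b = approved_a / total_a, approved_b / total_b
    p = (approved_a + approved_b) / (total_a + total_b)   # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical trial data: bail-recommendation rates for two groups.
z = two_proportion_z(approved_a=620, total_a=1000, approved_b=540, total_b=1000)
print(abs(z) > 1.96)  # True here: the gap clears the 5% significance bar
```

Clearing such a test is a necessary condition, not a sufficient one; a model can pass aggregate checks while still being biased on subgroups or proxies.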

"Computer says no" was a funny catch-phrase in "Little Britain" because it was really an excuse a human jobsworth used to deny a customer's request. It's a whole lot less funny when it really is the computer saying "no", and there's no human being in the loop. But what if the computer is saying "no" because its training data doesn't like left-handedness or Tuesday mornings? Would you even know? And where do you go if there's no right of appeal to a human being?

So where is AI going?

Now, I've just been flailing around wildly in the dark for half an hour. I'm probably laughably wrong about some of this stuff, especially in the detail level. But I'm willing to stick my neck out and make some firm predictions.

Firstly, for a decade now IT departments have been grappling with the bring-your-own-device age. We're now moving into the bring-your-own-neural-processor age, and while I don't know what the precise implications are, I can see it coming. As I mentioned, there's a neural processor in my iPad. In ten years' time, future-iPad will probably have a neural processor three orders of magnitude more powerful (at least) than my current one, getting up into the trillion ops per second range. And all your students and staff will be carrying this sort of machine around on their person, all day. In their phones, in their wrist watches, in their augmented reality glasses.

The Chinese government's roll-out of social scoring on a national level may seem like a dystopian nightmare, but something not dissimilar could be proposed by a future university administration as a tool for evaluating students by continuous assessment, the better to provide feedback to them. As part of such a program we could reasonably expect to see ubiquitous deployment of recognizers, quite possibly as a standard component of educational courseware. Consider a distance learning application which uses gaze tracking, by way of a front-facing camera, to determine what precisely the students are watching. It could be used to provide feedback to the lecturer, or to direct the attention of viewers to something they've missed, or to pay for the courseware by keeping eyeballs on adverts. Any of these purposes are possible, if not desirable.

With a decade's time for maturation I'd expect to see the beginnings of a culture of adversarial malware designed to fool the watchers. It might be superficially harmless at first, like tools for fooling the gaze tracker in the aforementioned app into thinking a hung-over student is not in fact asleep in front of their classroom screen. But there are darker possibilities, and they only start with cheating continuous assessments or faking research data. If a future Home Office tries to automate the PREVENT program for detecting and combating radicalization, or if they try to extend it — for example, to identify students holding opinions unsympathetic to the governing party of the day — we could foresee pushback from staff and students, and some of the pushback could be algorithmic.

This is proximate-future stuff, mind you. In the long term, all bets are off. I am not a believer in the AI singularity — the rapture of the nerds — that is, in the possibility of building a brain-in-a-box that will self-improve its own capabilities until it outstrips our ability to keep up. What CS professor and fellow SF author Vernor Vinge described as "the last invention humans will ever need to make". But I do think we're going to keep building more and more complicated systems that are opaque rather than transparent, and that launder our unspoken prejudices and encode them in our social environment. As our widely-deployed neural processors get more powerful, the decisions they take will become harder and harder to question or oppose. And that's the real threat of AI — not killer robots, but "computer says no" without recourse to appeal.

I'm running on fumes at this point, but if I have any message to leave you with, it's this: AI and neurocomputing aren't magical and they're not the solution to all our problems, but they are dangerously non-transparent. When you're designing systems that rely on AI, please bear in mind that neural networks can fixate on the damnedest stuff rather than what you want them to measure. Leave room for a human appeals process, and consider the possibility that your training data may be subtly biased or corrupt, or that it might be susceptible to adversarial attack, or that it turns yesterday's prejudices into an immutable obstacle that takes no account of today's changing conditions.

And please remember that the point of a university is to copy information into the future through the process of educating human brains. And the human brain is still the most complex neural network we've created to date.

929 Comments

1:

The first thing I would add is that AI doesn't just launder out unspoken prejudices - often those prejudices are completely unknown.

As Amazon found with their AI recruitment tool - their entire structure was biased against women but not just in the ways they had identified but even more so in ways they hadn't themselves identified.

2:

That was a great talk, Charlie. Thanks.

As I mentioned at the break, it may be interesting to see how this affects distance learning. E.g. will we see facial recognition used to verify identity in online exams, and will this be countered by real-time deepfake video? Will bots take the entire course and gain the qualification on the student’s behalf?

3:

And then the bots get the jobs?

And in an ideal world this is the point, to free up people's time for something they consider worthwhile rather than for something somebody pays them for.

I also suspect that we don't get that world.

4:

I like this essay, how was it received by the audience ?
How much feedback did you get ?

5:

While I agree entirely with the overall drift of that analysis, I don't think the problem is all that new. We already have plenty of "black box" components in our IT systems. I may have a notion of the general principles of how a modern CPU works, but I have no idea what assumptions (possibly unconscious ones) are built into the specific architecture of the chip in the machine I am using -- assumptions which make it vulnerable to various exploits. Similarly, various libraries I use in programming and which are used in various software packages I use are effectively black boxes, which may (and often do!) contain assumptions about what combinations of inputs and context may arise, leading to effective malfunctions.

6:

Another example of neural network distraction:

Somebody tried to train a neural network to identify skin cancer, and ended up with a neural network which almost perfectly identified rulers graduated in millimeters.
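That failure mode is easy to reproduce in miniature. A sketch with invented data (plain logistic regression standing in for the network): when a confounding feature (the "ruler") tracks the label perfectly, the model latches onto it rather than the feature you actually care about.

```python
import math
import random

random.seed(0)

# Invented "skin photo" data: feature 0 is a noisy lesion measurement that
# carries no signal in this synthetic set; feature 1 is "a ruler is visible",
# which co-occurs with malignancy perfectly -- as in the anecdote above.
data = []
for _ in range(200):
    y = 1.0 if random.random() < 0.5 else 0.0
    lesion = random.gauss(0, 1)   # no real signal here
    ruler = y                     # perfect confound
    data.append(([lesion, ruler], y))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

# Plain logistic regression trained by stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(200):
    for x, y in data:
        err = sigmoid(w[0] * x[0] + w[1] * x[1] + b) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print(w)  # the "ruler" weight dominates the lesion weight
```

The model classifies the training set almost perfectly, for entirely the wrong reason, which is exactly why held-out tests on confound-free data matter.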

7:

It's not new, but it wasn't always the case. Even as late as the end of the 1970s, software was engineered and its maintainers at least identified almost every bug/problem/'feature' they looked into. They didn't always FIX them, of course. But now, software is 'bred' and 'trained' and the objective of the developers is simply to ensure that the worst bugs/problems/'features' do not show up in the new version. They may still be present, but unlikely, or may reappear for some data.

One point that I noted, and would be interested in an expansion of is "90% here today, 9% not here yet but on the drawing boards, and 1% unpredictable. I came up with that rule of thumb around 2005, but the ratio seems to be shifting these days."

I agree with that and, thinking back, it's been true for a LONG time, but don't see a change. Is it really shifting? And, if so, how? What I do see is that it's increasingly hard to identify the 9% because it is buried under so much more crap.

8:

This reminds me of some debates I've had with colleagues about modeling biological systems. Some people want the models to be as accurate as possible. For instance, there's currently a model cell that can, in principle, replicate normal cellular processes. The problem with such models, of course, is that no one can actually explain how they arrive at any particular end result.

My argument has always been that this kind of modeling misunderstands the purpose of modeling. A model, as the word implies, is not supposed to be The Real Thing. It is supposed to be a simplified version of The Real Thing, that captures the essence of whatever behavior it is you're trying to understand in a way that allows it to be explained. And, if it's a good model, that explanation will have some predictive application to The Real Thing.

I agree with what I take to be the main point of this essay: that the big problem with AI (or "Machine Learning", in the jargon that is currently fashionable in the Applied Mathematics Department of which I am a member) is the lack of transparency. It produces machines that work (for some values of "work") but can't be explained.

I wonder if there might be a future in modeling AIs? Perhaps one could even develop an AI approach to modeling AIs? There are enough AIs out there for a decent-sized training set. But to produce it, someone would have to produce explanations of them (in some mathematical or controlled vocabulary form). Which, IMHO, might be a useful exercise in itself.

9:

Explainable AI, and more specifically turning large opaque boxes into smaller less opaque boxes is one of the hottest fields in AI right now. See https://scholar.google.com/scholar?cites=8939263267074417112 for the 2587 papers (as of right now, climbing most days) citing the 2014 paper on distillation by Hinton et al., as just one illustration. Your concerns are shared by many researchers.
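For anyone who hasn't met distillation: the idea is to train a small, inspectable "student" to imitate a big opaque "teacher" by fitting the teacher's soft output probabilities rather than hard labels. A toy sketch (the "teacher" here is just a made-up bumpy function standing in for a large trained network):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

# Stand-in "teacher": an opaque model we can only query. In practice this
# would be a large trained network; here it's a hand-made function.
def teacher(x):
    return sigmoid(4 * (x - 1) + 0.3 * math.sin(5 * x))

# Distillation: fit a small transparent model (one-feature logistic
# regression) to the teacher's soft outputs rather than to 0/1 labels.
xs = [i / 10 for i in range(-20, 41)]
w, b, lr = 0.0, 0.0, 0.1
for _ in range(3000):
    for x in xs:
        err = sigmoid(w * x + b) - teacher(x)   # soft target, not hard label
        w -= lr * err * x
        b -= lr * err

# The student is a readable surrogate: a single threshold near x = 1,
# and we can measure how often it agrees with the teacher's decisions.
agree = sum((sigmoid(w * x + b) > 0.5) == (teacher(x) > 0.5) for x in xs)
print(agree, len(xs))
```

The distilled student won't capture everything the teacher does, but you can read its parameters directly, and you can quantify the fidelity of the approximation.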

Charlie makes an important point: we incorporate features of our present society into the systems we build (good and bad features both), and those systems serve to entrench and perpetuate those features. This can be a way to achieve some resilience against excessive institutional change, but it is also a way to ossify broken assumptions into unchangeable rules. (Also, this doesn't just apply to machine learning, although it perhaps appears most acutely there.) I'm reminded of an essay that turns this observation into a somewhat coherent argument against automation, ostensibly wrapped up as a book review of the Dune books: https://www.lesswrong.com/posts/TifG2m7BYW2sGmAoR/leto-among-the-machines

10:

Interesting. Thanks.

11:

Smartphones are technological cannibals, swallowing up every available portable electronic device that can be crammed inside
PROVIDED, of course, that you can get the fucking thing's programmes to work as expected, or even find a set of comprehensive instructions ... grrr.


OK - what is the most recent "peak" number now? Over or under 10 billion?
All this helps, provided we can get past that peak - as it interplays with global warming.

"GAN" ?? General Algorithmic Network maybe?

LAvery @ 8
And let's not even think about simply turning an "AI" loose & letting it "learn" - whatever that means ...
[ Given some outputs of late, which have produced alarming results ]
And/Or allowing it to modify itself by "evolving" as has already been tried ( IIRC ) in electronics, which has come up with some truly weird ( as well as wired ) circuits that WORK - but no-one seems to be able to understand how ( or why ) they work.
Now there is real "black box" electromechanical/computing evolution blindly making its own watches, maybe.

12:

Peak population estimates: see https://en.wikipedia.org/wiki/Projections_of_population_growth (depending on how several critical factors turn out, including possibly some not yet well-understood ones, peak is anywhere from 8.1 billion up to no limit).

13:

XKCD: DNA

While this is true, the really amazing truth is that biology is, in fact, explainable. We do it all the time, both intuitively and professionally. If I ask you "Why does your cat jump on the kitchen counter?" or "What does a heart do?" or "Why did Boris Johnson say such-and-such?", you will probably have an answer. (Ignore for now the fact that 100 different people will give 99 different answers.) There is no simple reason why it had to be. "Explainability" is not one of the variables natural selection optimizes (unlike engineering as done by human engineers).

This gives me hope that taming the opacity of AIs may not be an intractable problem.

14:

Er, no, sorry. The best we can say is that we can explain some phenomena, and will be able to explain more in the future. I have a friend who is a professor of immunology, we were talking about this a year ago, and agreed (on different grounds) that it's not plausible in the foreseeable future.

The following is the fundamental (mathematical) reason:

It isn't possible to build a Turing machine that can predict all emergent behaviour of another, arbitrary, Turing machine (the halting rule is a special case). Gödel's result is more-or-less equivalent. Unless you are following Penrose into quantum gravity woo, the human species and all of its engineered tools is effectively a Turing machine, and (most) biological systems are other ones.

There is a niggle here, which I posted on the last thread. The Turing/Gödel limits do not necessarily apply to sufficiently complicated extensions of the numbers, which MIGHT be used by biological systems - but the smart money is on there being an equivalent limit even for those extensions, and certainly for biological systems.

15:

I want to pick up on the algorithmic targeting of fake news. Maybe this falls into the 9%, but I have a nasty feeling its going to be one of the big ones.

From Wikipedia:

Legitimation crisis refers to a decline in the confidence of administrative functions, institutions, or leadership. The term was first introduced in 1973 by Jürgen Habermas [...]. With a legitimation crisis, an institution or organization does not have the administrative capabilities to maintain or establish structures effective in achieving their end goals

In The End of History and the Last Man Fukuyama argued that history is tending towards liberal democracy because it is the only system of government that is immune to crises of legitimacy; if the current rogues lose legitimacy then they get voted out and new rogues put in their place. This is in contrast with every other system in which the establishment hangs on by its fingernails until forced out by violent revolution, at which point the cycle repeats until democracy asserts itself and becomes the new stable state.

However liberal democracy requires the population to have a common understanding of the world. People might disagree about the best government policy, but they still agree about the broad issues, the shape of the problem, and who won the last election. This used to be true because everyone watched the same TV programs and read a small number of newspapers (and whatever you may think of the way the front page stories were chosen, those papers did not routinely propagate outright fiction). However with algorithmic targeting we are heading into a world in which those things are no longer true because people are being given radically different views of the world, and there are small but growing minorities who have accepted fictions deep into their world views. Charles mentions the antivax movement. Flat earthers are another one, as are the alt-right. Who knows what new kooky conspiracy theory will jump out and become a fundamental part of the world view of some minority.

This can be weaponised, so of course it is being. If you want to paralyse an opponent nation then getting them arguing, or better yet fighting, over whether the sky is blue or pink could be a very effective tactic. If you can get them to do something nationally self-destructive then even better. Confusion to the enemy!

The question then is, can liberal democracy avoid a legitimation crisis, not of the current leaders, but of the concept itself. When most people view most other people as not merely mistaken, or even deluded, but actually malign, how can anything like democracy survive?

China believes it has the answer; control the flow of information to the people in order to enforce the single world view that the government wants. It seems to be working; ask anyone in China about the Hong Kong situation and you will hear a list of government talking points. However this takes us back to Fukuyama; such a system is not stable in the long term; sooner or later it will suffer a crisis of legitimacy and be overthrown in violent revolution.

(Aside: USA Civil War 2 and Chinese Revolution 2 are my big predictions for the next 50 years).

So, what happens to our civilisation if anyone who stands up and says "follow me" is instantly de-legitimised?

16:

Given that the hypothesis that liberal democracy is immune to legitimation crises is severely damaged by contact with things like "the opinions of the average taxi driver" ("they're all the bloody same, those politicians") or "an episode of Yes Minister," I suspect we shouldn't waste too much time on it.

17:

Paul
China believes it has the answer; control the flow of information to the people in order to enforce the single world view that the government wants.
Exactly the same idea that both the Nazis & the "old" CP ( Like Stalin or Mao ) used ...
We saw how well ( or not ) that worked.

18:

"GAN" ?? General Algorithmic Network maybe?
Generative Adversarial Networks
The generative network generates candidates while the discriminative network evaluates them.[1] The contest operates in terms of data distributions. Typically, the generative network learns to map from a latent space to a data distribution of interest, while the discriminative network distinguishes candidates produced by the generator from the true data distribution. The generative network's training objective is to increase the error rate of the discriminative network (i.e., "fool" the discriminator network by producing novel candidates that the discriminator thinks are not synthesized (are part of the true data distribution))
There are other descriptions out there that might be clearer to you. TBH it took me a while to understand the details.
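A minimal numerical sketch of that adversarial loop (a toy 1-D example, nothing like a practical GAN): the generator is a two-parameter affine map trying to pass its samples off as draws from N(3, 1), and the discriminator is a logistic classifier that the generator learns to fool.

```python
import math
import random

random.seed(1)

# Generator g(z) = a*z + b maps noise z ~ N(0,1); it must learn (a, b) so
# its output resembles real data drawn from N(3, 1). Discriminator is the
# logistic classifier d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, steps, batch = 0.05, 3000, 32

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

for _ in range(steps):
    real = [random.gauss(3, 1) for _ in range(batch)]
    noise = [random.gauss(0, 1) for _ in range(batch)]
    fake = [a * z + b for z in noise]

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    gw = gc = 0.0
    for x in real:
        d = sigmoid(w * x + c)
        gw += -(1 - d) * x; gc += -(1 - d)
    for x in fake:
        d = sigmoid(w * x + c)
        gw += d * x; gc += d
    w -= lr * gw / batch; c -= lr * gc / batch

    # Generator step (non-saturating loss): push d(fake) toward 1.
    ga = gb = 0.0
    for z in noise:
        d = sigmoid(w * (a * z + b) + c)
        ga += -(1 - d) * w * z; gb += -(1 - d) * w
    a -= lr * ga / batch; b -= lr * gb / batch

fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(2000)) / 2000
print(round(fake_mean, 1))  # drifts toward the real mean of 3
```

Even this toy version shows the characteristic instability: neither network is minimizing a fixed objective, each is chasing the other, which is part of why GAN training is notoriously finicky.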

19:

Michael2Bec: the audience appreciated it. I didn't get much feedback at the time (I nearly overran my slot), but as the next talk was "a history of Artificial Intelligence" by a professor at the oldest AI faculty in the UK (Edinburgh University), and he kept interjecting "... as Charlie said earlier ..." in his talk, I suspect I stuck the landing :)

20:

Charlie --

Apologies for an unrelated comment; please delete it if you feel it needs to go. I do not see any way to send you a private message, so have to resort to asking the question this way:

Do you consider Laundry to be science fiction or fantasy?

If you prefer to answer outside this blog, my email is ilyatay@gmail.com

Thank you beforehand!

21:

As you say, AI is an immense danger. I'd be totally against it, but I think that it's less of a danger than leaving humans in charge of civilization-lethal weaponry.

Additionally, there are other problems that may be insoluble (though perhaps not with properly coordinated action), but which people seem unwilling to effectively coordinate action against. Global warming is the type example here, but it's not the only one. It's not clear that an AI could solve the problems, but it has become clear that people can't. They'll take short term gains instead. I've seen several examples in various places around the world where resources are accumulated to address a problem, and then siphoned off into private pockets.

As for the form of the AI... I expect multiple variations to show up, plausibly not competitive with each other. One form might well be similar to Athene from Rule 34. Another to the Collegatarch from Alan Dean Foster's I-Inside. Conscious awareness would be hard to demonstrate in most of the forms, but may show up in a few. (It will usually exist, but be very non-human in form. In fact non-chordate.)

The significant problem will be if any of these have non-bounded goals. That could be lethal. And we probably wouldn't notice because of the prevalence of the bounded forms generating lots of social alterations. (I'm counting increased inflexibility as an alteration.)

Calling these AIs (any of them) super-human would be as justified as calling a calculator super-human...and just like a calculator, they'll be able to do things that (just about?) nobody human can do. They won't be as limited as a calculator, but even the more general AIs will have radically different goals. If survival is one of them, it will be an emergent property.

The problem with comparing a corporation with a computer based AI is that corporations are run by people who have many basic understandings that other people share, and computers don't. AFAICT so far AI programs that learn, learn optimization patterns within a context, and then naively generalize it outside that context. I'm certain this is being worked on now, but with what success is questionable. Additionally, a lot of the things that are called AI in the news aren't, except in the sense that Eliza and Sargon were. I.e. they are pattern recognizers, but they can't learn new patterns. This makes it difficult to realize what the characteristics of current AI programs are. A better clue is given by Tesla's "auto-pilot", or by (whatever the current company name is)'s Big Dog. But again, robotics is not where most of the development is happening. I've seen a chart saying that during Alpha-Go's development since first public release its power has increased dramatically, while its hardware requirements have decreased dramatically. Maybe that was correct. That's certainly one component of a general intelligence...but I'm not certain how significant a part. Still, that kind of change is significant, and perhaps to be expected as each component is developed.

When HR says "the computer selected candidate X", it's more significant whether the candidate is qualified than whether the selection was unbiased. It's desirable that the selection be unbiased, but bias that isn't worse than the current biases can yield a workable society. And it's probably impossible to come up with selection criteria that are actually unbiased and also yield a candidate that suits the job. (We could certainly do better, but not doing better should not be civilization lethal...or even company lethal.)

That said, if the criteria are not transparent, expect them to be hacked. If they are transparent, expect them to be gamed. People generally seek their own advantage above those of society. When uniform tests were imposed on public education here (well, in California) teachers started being required to teach only what was on those tests. So if AI has determinable criteria, expect people to fake adherence to those criteria...when possible, of course. But if it isn't possible, isn't that when we start leveling accusations of bias?

OTOH, HR has been inscrutable my entire work history. I did not, and in retrospect do not, understand their criteria. This may be a necessary part of their job (though I'm sure they don't know that) to avoid the system being gamed. Or because when people need to work closely together, they work better if they've got more commonalities.

22:

They'll take short term gains instead. I've seen several examples in various places around the world where resources are accumulated to address a problem, and then siphoned off into private pockets.

IIRC this has been studied quite a bit as part of the Free Rider Problem in economics.

My personal experience is that you can predict where the siphoning will happen next by looking at where those in charge have cut oversight and accountability. (Because we know that "the market" doesn't enforce honesty and integrity — if it did we wouldn't need food safety inspection etc.)

23:

I first heard of unknown/unknowns in the 90's. Usually the first 3 of 4 are mentioned:

1) known/knowns
2) known/unknowns
3) unknown/unknowns

Am rather intrigued by the rarely mentioned

4) unknown/knowns

From a personal level, the unknown/knowns are those tip-of-tongue names of famous people or high school friends -- we know when we are reminded. From a royal "we" perspective, the unknown/knowns are the facts which go unreported.

News organizations used to report on unknown/knowns until it became more profitable to report on unknown/unknowables, where the decline of religion has been supplanted by Faith News.

There is a crypto method of transforming unknown/knowns to known/knowns. Search on "How to play ANY mental game" by Goldreich, Micali, Wigderson. Somewhat akin to John Searle's "Chinese Room" where a system may anonymously produce a solution without knowing the problem.

I wonder what economic incentive would promote the wholesale transformation of unknown/knowns to known/knowns?

24:

Do you consider Laundry to be science fiction or fantasy?

They're fiction. Period.

(I don't consider genre tags like "science fiction" or "fantasy" to be terribly useful; after all, almost all science fiction is actually fantasy when you scratch and sniff -- from the flagrant stuff like "Star Wars" (Wizards in Space!) to more plausible works, there's usually an element of the impossible or implausible in there. All genre tags do is tell the bookshop clerk where to shelve the product with similar stuff.)

25:

Thank you! Somehow I thought you would say something like that :)

26:

All genre tags do is tell the bookshop clerk where to shelve the product with similar stuff.

FWIW, the Library of Congress lists Occult fiction and Fantasy fiction as the Form/Genre for The apocalypse codex. The Atrocity archives are classified as Fantasy fiction and Horror fiction. Accelerando is straightforwardly Science Fiction.

The subject classifications are fun. For instance, here's what they have for The annihilation score


Howard, Bob (Fictitious character)--Fiction.
Intelligence service--Great Britain--Fiction.
Husband and wife--Fiction.
Women violinists--Fiction.
Demonology--Fiction.

27:

And these posts are the reason why I love you (in a totally heterosexual way, of course (Cit.)).

28:

I love the analogy between artificial intelligence and corporations/government bureaucracies. But it feels like it either goes too far, or not far enough. I think the latter, and it's going to take a long comment to explain what I mean.

You wisely restrict the analogy historically to the modern era (i.e., the past few centuries). But if I look at the government of ancient Egypt, it sure looks a lot like what you're describing. And when we go earlier than the past few centuries, our modern-era distinctions between government bureaucracies, religion, and culture become more and more anachronistic (today's scholarly consensus, for example, is that religion in the sense we mean it today in Western culture is a social construct that emerged during the early modern era in Europe, partly as a way of justifying colonialism, i.e., as a way of justifying governmental and economic actions).

So I would say there's a strong case to be made that you could extend your argument about artificial intelligence back to the governmental/religious/cultural bureaucracies of ancient Egypt, the Zhou dynasty in China, the ANE, etc. Interestingly, I suspect that such bureaucracies were made possible in part by an innovation in information technology, i.e., the invention of written records.

Even if we stick to the modern era, there seems to be a big difference between Qing dynasty China on the one hand, and the squabbling warlords of far western Eurasia, er, the emerging nation-states of Europe. Here again, we can point to at least one innovation in information technology, double-entry bookkeeping, which empowered European corporations in some interesting ways. But another information technology innovation, the printing press, did not take off in China the way it did in Europe, raising the question: what's the difference? And why didn't China exploit many other technological innovations, such as the sailing rig of their junks, which was superior in many ways to the sailing technology of early Europe? There's no easy answer, but one thing that I'd point to is the difference in outlook between Confucian China and Western Christian Europe -- and if you're tempted to call that a religious difference, think again, because although the Jesuits who went to China in the early modern era wanted to call Confucianism a religion, their Chinese interlocutors said no it wasn't (and remember, what we call "religion" is merely a recent social construct).

But still, there's a big difference between the long-term stability built into government/religions/cultures like ancient Egypt and pre-twentieth century China, and Western culture. I think the distinction that Jonathan Z. Smith proposes for religions is useful here, as a distinction between two types of "maps" of the world, the locative "map" vs. the utopian "map." Smith writes:

"One finds in archaic cultures [which use a locative map] a profound faith in the cosmos as ordered in the beginning and a joyous celebration of the primordial act of ordering as well as a deep sense of responsibility for the maintenance of that order through repetition of the myth, through ritual, through norms of conduct, or through taxonomy." [Smith, Map Is Not Territory, 1978]

Although it's not an archaic culture, Chinese government of the Qing dynasty and earlier and its associated Confucianism, with an emphasis on order, continuity, stability, used what looks to me like a locative map. (And so did the Roman Empire, as Robert Silverberg captures in his Roma Eterna series.) After describing what locative means, Smith goes on to describe the utopian map:

"But it is equally apparent that in some cultures the structure of order, the gods that won or ordained it, creation itself, are discovered to be evil and oppressive. In such circumstances, one will rebel against the paradigms and seek to reverse their power." [Ibid.]

This accurately describes Western European, and later North American, maps of the universe: the map that we use, and that we take for granted, is a utopian map: for us there is something wrong with the world that needs changing, so we constantly seek rebellion or reversal.

The problem with AI, then, is not so much the technological/social innovation of AI (or earlier, of corporations, or governmental bureaucracies) but rather AI (or corporations or governments/religions/cultures) in combination with a utopian map. By contrast, with a locative map, i.e., if everyone's major concern was order, continuity, and stability -- then first of all developing the technology that makes bureaucracies possible would be a low priority, and second of all if we humans did develop such technology it would be used to reinforce stability, which we would all like.

To make things more complicated, we Westerners believe in the inherent worth of individuals, a new belief that grew out of the utopian map of the early Enlightenment. And because those of us who are products of Western culture believe strongly in the value of the individual, we also believe we have personal rights. Thus if we don't personally like a religion, we leave; if we feel a government bureaucracy stomps on our rights we take to the streets of Hong Kong; if a corporation uses a new information technology called "AI" that violates people's human rights by being prejudiced in favor of white men, we're outraged. Yet in the last case, the development of that AI is driven by exactly the same kind of utopian impulse as the one that makes us believe in human rights. This is a big problem with our utopian map: one person's successful rebellion or reversal becomes another person's target for reversal or rebellion.

In short, the problem isn't AI as such, nor bureaucracies nor religions nor cultures as such. The problem is that we both use a utopian map, but your utopia might be my dystopia. This doesn't change the basic argument in your talk, by the way -- it just extends it.

FWIW, as a footnote, Jonathan Z. Smith points out that there are other possible "maps" humans have used. Some human cultures use maps "that are more closely akin to the joke in that they neither deny or flee from disjunction, but allow the incongruous elements to stand" -- I think of these as Terry Pratchett cultures. Other human cultures "play between the incongruities, and provide an occasion for thought" (a few of Ursula K. Le Guin's stories head in this direction). And presumably, over the millennia of human history, there are many other "maps" beyond these....

29:

I was told that Utopia wasn't science fiction by an English master at school. One could add Erewhon, Brave New World, Butler and large chunks of Swift, Conan Doyle and Haggard to the corpus of "something that passes the duck test for science fiction, but isn't called that".

And, as you say, the boundary between science fiction and fantasy is a trifle vague ....

30:

The AI-vs-tank story was around at least when I was a Comp Sci undergrad in the early 80's, so it wasn't a modern neural net. My memory of what the story was jibes with the Minsky quote in

https://www.gwern.net/Tanks

But Artificial Neural Nets aren't the only ones that can be led astray by biased training data. In WWII, the Russians trained dogs to run and crouch under tanks. Once they could do that perfectly they were taken to the battle front, in a region where the Germans had tanks but the Russians didn't. The dogs were fitted with explosive vests and released.

Humans are very good at differentiating tanks from trucks: they look nothing at all alike. Dogs are very good at differentiating Russian vehicles from German vehicles: they smell nothing at all alike (diesel vs petrol?). So the dogs faithfully ran to the vehicles that smelled most like what they had trained on, crouched under them, and blew up.

Or that's the story I heard.

"They set a slamhound on Turner's trail in New Delhi, slotted it to his pheromones and the color of his hair."

31:

"They set a slamhound on Turner's trail in New Delhi, slotted it to his pheromones and the color of his hair."

Neuromancer or Count Zero?

33:

Most, not all, but most, of the singularity predictions I have read, fiction or supposedly serious, suffer from the same "thieves assume everybody steals" attitude: they assume the runaway (A)I behaves in an incredibly human way with respect to both desires and methods.

But with the possible exception of FaceBook, none of the current runaway-AI corporations seem to have realized their own identity or potential as such.

This leads to what one could call the "Who? Me? Singularity", where our newborn superhuman AI suddenly discovers the fine print in the job description.

It has come as a big awful surprise to most of these companies that people assumed them to be responsible for solving, or at least mitigating, very, very hard problems, such as recognizing propaganda, preventing or disrupting mass psychoses (from dangerous diets to fascism) and, worst of all, the generically impossible "do the obvious right thing in this case", where the case is everything from missing children through sex trafficking to 4chan.

(I single out FaceBook, because sound-bites exist which indicate that their Dear Leader fully understands and intends to exploit the potential of his pet AI.)

34:

The right to human appeal of an algorithmic decision is baked-in to European law, in a directive - the General Data Protection Regulation - which every member state has enacted in domestic law.

Someone was thinking ahead, when that provision was drafted: it deserves to be better known.

35:

> It has come as a big awful surprise to most of these companies that people assumed them to be responsible for solving, or at least mitigating very, very hard problems, such as recognizing propaganda, preventing or disrupting mass-psycoses...

It might be more accurate to say that people assumed them to be responsible for not making those problems significantly worse than they previously were.

That people would object when they discovered those problems have become markedly worse does not seem surprising to me, although it may have been surprising that Facebook et al would worsen them.

36:

But Artificial Neural Nets aren't the only ones that can be led astray by biased training data.

A friend's daughter was going to a private elementary school (I think it was Montessori, but it might have been Waldorf). They used different coloured numbers to help the kids learn mathematical properties — primes were red, non-primes blue, or something like that. The kids were supposed to discover a mathematical pattern and the numbers were a prompt.

I wasn't at all surprised when his daughter, who was a bright girl, learned that red numbers couldn't be evenly divided by other numbers but blue ones could — and associated the mathematical property with the colour rather than the primeness. Black numbers she didn't know the rule for yet…
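The colour-coded numbers are a tidy miniature of the biased-training-data failure: the learner keys on a feature that happens to correlate perfectly with the target. A toy sketch (the data and the "learner" are invented purely for illustration):

```python
# Toy illustration of learning a spurious feature: in the training data,
# colour correlates perfectly with primality, so a naive rule-learner that
# keys on colour looks flawless -- until it meets an uncoloured number.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Training data: (number, colour) pairs where red == prime by construction.
training = [(n, "red" if is_prime(n) else "blue") for n in range(2, 20)]

# The "learner" memorises the colour -> label rule, ignoring the number itself.
colour_rule = {colour: is_prime(n) for n, colour in training}

# On coloured numbers the rule is perfect...
assert all(colour_rule[colour] == is_prime(n) for n, colour in training)

# ...but a plain black number carries no usable feature at all.
print(colour_rule.get("black"))  # -> None: the rule simply has no answer
```

Swap "red" for "smells of Russian diesel" and you have the tank dogs.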

37:

“we incorporate features of our present society into the systems we build (good and bad features both), and those systems serve to entrench and perpetuate those features.”
Very true of transportation infrastructure.

38:

"And/Or allowing it to modify itself by "evolving" as has already been tried ( IIRC ) in electronics, which has come up with some truly weird ( as well as wired ) circuits that WORK - but no-one seems to be able to understand how ( or why ) they work."

Half a page of A4 covered in a dense and impenetrable spider-on-acid tangle of gates and interconnections in configurations that make no sense, that against all expectation produces pulses at regular 0.1ms intervals... The evolutionary algorithm explores all of the behaviour space of the circuit elements, rather than the limited region of safe and predictable behaviour prescribed by the datasheets, and can even interconnect them in ways not represented by the circuit diagram or the FPGA's routing logic - not merely relying on a race condition that happens to come out right because this particular FPGA cell has 0.1ns less propagation delay than the next one along, but relying on being able to change that behaviour by dissipating power in some other nearby cell and changing the temperatures of the cells in question.

They are superficially difficult to understand because they are being represented in an inappropriate symbology, that misrepresents the behaviour of elements, assumes a uniformity of behaviour which does not exist, and leaves a whole shitload of necessary information out entirely. But that can all be resolved by doing a bunch of mindbuggeringly tedious and exhaustive measurements on the FPGA to quantify the unspecified behaviour the evolutionary algorithm has found and made use of, and redrawing the diagram in an appropriate manner... as long as they are not too big.

Where they are really difficult to understand is in continuing to keep a handle on how all that information interacts as the scale increases, and that is just as true if you constrain the evolutionary algorithm to stick rigidly to datasheet behaviour and thereby avoid the preliminary step of finding out what behaviour it actually has used. So if you have an evolved circuit that produces 10kHz pulses you can trog through it and figure out how; if it produces the waveform corresponding to "Hello Greg, are you going to ask me some questions again?" in the voice of Neil Pye, you probably can't; and if it starts answering them in the character of Neil Pye, you are almost certainly fucked.

Things like Charlie's racist face recognisers are at that next level up. We know exactly how the basic elements work, with a certainty of 1 in 10 to the lots, but we have no means of discovering that the spiderweb tangle of zillions of them all doing their incredibly-well-defined thing is going to turn out to be racist except by doing the experiment (and finding that the real world does it a lot more thoroughly than the lab can manage).

The problem is that we simply do not have any decent toolkits for handling complexity, and what toolkits we do have tend to embed their own opaque complexity so it doesn't really solve the problem, it just moves it next door. And this applies not just to understanding the behaviour of... well, it's not AI; of stuff called AI, but to designing it in the first place, and indeed also to having a proper idea of what we're supposed to be designing at all. We end up not just with the brute force and ignorance approach, but with recursive brute force and ignorance - "if it doesn't work, hit it with a hammer; if it still doesn't work, use a bigger hammer to hit the first hammer with".

Charlie cites Google Translate being sexist as an example of what goes wrong with this kind of approach, but the same deficiencies cause it to fail on the scale of entire languages. It tends to pick "he" in association with some professions and "she" with others because it's doing a kind of Markovian probability context-sensitive-dictionary thing: comparing sequences of words in one language with sequences that mean the same thing in another language, and working out that this word appearing in the source language, given these other words around it, is most likely to correspond to that word appearing in the target language where it has those other words around it. Therefore it automatically adopts the sexist bias of typical English usage when translating into English from a neutral language.

But another limitation of this technique is that how well it works depends on how similar the word-order rules of the two languages in question are. So with French or Spanish, it does pretty well. With German, it gets its knickers in a twist over the big pile of verbs at the end of the sentence and gets all the various clauses they refer to mixed up. With Latin, it basically hasn't got a clue and spits out a bunch of words that may or may not translate any of the Latin words, in an order that makes no sense. (There was me thinking that the rigidly logical structure of Latin would make it a natural for machine translation... only the machine translation doesn't know what it's doing, so what we get is "slrug eht fo secoiv teews eht vul attoC dna sublaB".)
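The pronoun-picking mechanism described above can be caricatured in a few lines: a maximum-likelihood lookup over co-occurrence counts, where the corpus bias simply becomes the output. (The mini-corpus below is invented; real systems use vastly larger statistics, but the failure mode is the same.)

```python
# A toy, made-up "corpus" of (context word, pronoun) pairs, standing in for
# the statistics a translation model learns. Translating a gender-neutral
# pronoun (e.g. Hungarian "ő" or Turkish "o") then just means picking
# whichever English pronoun co-occurred most often with the context.
from collections import Counter

corpus = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

counts = {}
for context, pronoun in corpus:
    counts.setdefault(context, Counter())[pronoun] += 1

def translate_neutral_pronoun(context):
    # Maximum-likelihood choice: the corpus bias becomes the output.
    return counts[context].most_common(1)[0][0]

print(translate_neutral_pronoun("doctor"))  # -> "he"
print(translate_neutral_pronoun("nurse"))   # -> "she"
```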

39:

Hi Charlie, love everything you write (and have lurked here for years now).

A little bit of pedantry: you seem to be using the term GAN for any kind of deep neural network task, but GANs are not classifiers. A GAN is a specific kind of neural network that has the goal of generating new outputs that are like the inputs it was trained with. So, you train a GAN with a bunch of cat pictures, then use it to generate new cat pictures that weren't in the training set; or train it with a lot of SFF stories and it might generate new bits of text that are SFF story-like; etc.

So you might use a GAN to generate new CVs that get past the HR AI, but the HR AI won't be a GAN. It's certainly a deep neural network of some sort, though; probably a Convolutional Neural Network (CNN). GANs are sometimes used to help train deep neural nets when you don't have enough training data (or enough labeled training data), and also in some Deep Fake systems. But they wouldn't be applied directly to most of the tasks mentioned in your lecture.
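To make the interface distinction concrete, here's a deliberately silly sketch (toy stand-in functions, not any real network or library): a classifier maps a sample to a label, while a GAN-style generator maps random noise to a new synthetic sample shaped like the training data.

```python
import random

# Schematic interface difference between the two kinds of system:

def classifier(cv_text):
    # e.g. the HR screening net: sample in, decision out
    return "interview" if "python" in cv_text.lower() else "reject"

def generator(noise):
    # e.g. a GAN's generator half: noise in, synthetic sample out
    words = ["Python", "Java", "leadership", "synergy"]
    return " ".join(random.Random(noise).sample(words, 2))

print(classifier("10 years of Python"))  # -> "interview"
print(generator(42))                     # a new, synthetic CV fragment
```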

40:

“governments provide an external environment within which the much smaller rules-based corporations can exist.”
And when the larger corporations grow larger than the smaller governments you get tax havens. I suspect that’s just the beginning, and a whole bunch more “interesting” symbioses and parasitisms will arise in years to come.

41:

"if everyone's major concern was order, continuity, and stability - then first of all developing the technology that makes bureaucracies possible would be a low priority, and second of all if we humans did develop such technology it would be used to reinforce stability"

Well, you don't need a whole lot of technology to make a bureaucracy possible - assuming you have enough food surplus to allow for the bureaucrats to sit on their arses all day, then all you need is some durable method of symbolic record keeping, writing or clay tablets or notches in sticks or Tzimptzon's Individual Stringettes or whatever; once you have the concept of a symbol at all, it's kind of hard not to invent it. And AIUI it was originally invented for (or else extremely soon adopted for) such thoroughly order/continuity/stability-related matters as keeping records for astronomy, calendars and trading accounts. And as noted above, bureaucracies do tend to enforce stability by incorporating fossilised representations of the way things were when they were created.

Pratchett has one of Vetinari's insights as being that, never mind all the shouting and placard-waving they get up to, what people really want is for tomorrow to be pretty much the same as today.

42:

Just consider the old fables: the djinni who grants three wishes, or the fool who thinks he has a foolproof deal with the devil. They never end well.

We are dealing with powerful, nonhuman entities that do not have our interests at heart. Or a heart.

43:

A few disjointed thoughts:

1. If you class bureaucracies in AIs, then the Singularity started a few centuries ago and really got serious around 1950. Prior to, oh, 1600 or so, a good chunk of the world was outside the rule of any bureaucratic state. Even with the Age of Empire, a lot of control was in the cities and roads connecting them, rather less in the countryside (there was an old saying in Burma that kingdoms shrank to the size of cities and walled palaces during the monsoon, because most of the land became impassable to armies but not to people). The technology for seriously assaulting the last refuges of statelessness only really existed since WWII (helicopters especially). Now statelessness only sort-of occurs in places like Somalia, Afghanistan, and similar, although these are notionally labeled as states. Anyway, the singularity is almost entirely complete, and slow AIs have taken over. At least temporarily. I think anyone looking at bureaucracies as AIs would not question that a singularity could be very, very temporary.

B. If computer science is about training AIs and explaining their actions, it will increasingly have the problems associated with sciences like sociology and ecology, which are pretty good at explaining the actions of complex systems, but not so good at predicting future states. That's not necessarily a good thing from an engineering standpoint.

III. There's something to be said for the basic Buddhist truth that life is unsatisfactory, transient, and egoless. The unsatisfactory part should be self-evident with current politics. Transience is that nothing lasts forever (which agrees with basic physics). The egoless notion actually appears to be correct, in that there is no "soul" that marks a thing as unique. Every thing is an aggregate of smaller things. Living things are especially fraught, as they are metastable processes that depend on the constant intake and expulsion of matter to continue their existence. With organisms like humans, we're utterly dependent both on a whole ecosystem of bacteria within us, and on a bigger ecosystem that includes other humans around us.

AIs, whether bureaucratic or electronic, conform to this view of individuals. So rather than expecting them to develop individuality or the illusion of an ego, we need to deal with them as they are.

D.4. Just as human systems can be derailed by a wasp sting, or a dose of LSD, or toxoplasma, or rabies, or ethanol, or a peanut allergy, AIs can be derailed by tiny things that cause problems that may not be obvious to their designers. Yes, I do think it's entirely possible that future AIs can make our lives effing miserable or end them all together. However, we've already seen many, many ways that a small number of people (right now, those with a lot of money) can derail rather large AIs. And I don't think that problem's going away.

44:

Indeed, we often can create useful and sometimes surprisingly simple explanations of biological systems. The same would apply to many artefacts produced by machine learning. For instance, many neural networks are really just overcomplicated regression classifiers. However, the skills to perform the analysis are different to those required to build the artefact, and take time, so currently we have lots of opaque gee-whiz boxes which seem mysterious because we don't yet have off the shelf tools to automate the analysis. In contrast, creation is supported by a massive ecosystem.

Interesting things will emerge from this primordial soup. But right now weird monsters are more visible than things like little tweaks suggested by LCZero to the evaluation function of Stockfish to improve its strength markedly. We can see The Weights at https://lczero.org/networks and get a sense of their heft (100MB or so) but the shape of things they represent is still murky.

45:

A silly example:

XKCD.

More seriously, this shouldn't surprise anyone - everything humans create carries their biases with it. From Martha Wells' Murderbot Diaries (Artificial Condition):

"I guess you can’t tell a story from the point of view of something that you don’t think has a point of view."

What are we not thinking of, because it's just not part of our worldview?

46:

Dan
The invention of written records .... which were for the purposes of ... Accountancy. At first, anyway.
They had not invented "money" but they had invented accountancy. Um. [ See also: Pigeon @ 41 ]
Printing Press in China .... because of their ideographic system - actually.
Whereas, in the "West" you only need copies of 26 symbols to write anything at all - simples.
In such circumstances, one will rebel against the paradigms and seek to reverse their power. Too right, cobber.
The Greeks were right (again) - stealing fire from the "gods" was what made us different.
As two very different SF authors have noted: H Beam Piper - "Talk-&-build-a-fire? They're sentient!"
R Kipling - only Mowgli will handle the Red Flower

EC
"Utopia" was satire - like Swift's work. Except, of course, Swift went on to envision Laputa.

PHK
Hmm - is Zuckerberg an actual fascist, or something else - is he "just" a panopticon-autocrat, for instance, or is there (as yet) no classification, or have we not looked down the right rabbit-hole in the past for his label?
I'm reminded of the quote on ultra-protestantism under Calvin: "It was as if all the walls of the houses in Geneva had been turned into glass"

RvdH
That transport infrastructure quote is very reminiscent of the underlying rule(s) in Kipling's "With the Night Mail" & "As easy as ABC"
Nothing shall interfere with free communication or transportation
.....
Also - companies bigger than governments ...
Look up the history of the "East India Company" ( The English one, not the Dutch) - &/or a recent book by ... Wm Dalrymple on the subject.

47:

Hmm - is Zuckerberg an actual fascist, or something else - is he "just" a panopticon-autocrat, for instance, or is there (as yet) no classification, or have we not looked down the right rabbit-hole in the past for his label?

That one's easy: Zuckerberg is a billionaire -- the archetype of the subspecies of multibillionaire unique to the 21st century, whose wealth is predicated on disintermediating human relationships. Jeff Bezos of Amazon is of the same type, but more specialized -- he only disintermediates commerce and retail relationships. Zuckerberg, however, relies for his wealth on the ability to monetize the panopticon, which makes him incredibly dangerous to civil society.

48:

Charlie
So he's as dangerous as Jean Calvin, as I suspected, yes?
Using money rather than religious blackmail, but the same ethos - control.
How do the AIs called "governments" deal with people like Zuckerberg, then?

49:


Am rather intrigued by the rarely mentioned

4) unknown/knowns

I like that observation!

However, I define it differently: "Things that we know, but don't realise we know"

Consider that an (almost) all-white, all-male, all Oxford-educated upper caste of First Division civil servants might be sort-of-aware, collectively, that they are not as inclusive as they should be.

They may even be aware that they can and should do better, and start addressing some of the overt behaviours and rules that they can see are barriers to recruitment and the career progress of outsiders.

But there are a number of barriers - some of them very simple but, like privilege, invisible from above - and other barriers which are subtle and complex but consistent (and highly effective!) patterns of responses and behaviours that could be modelled as algorithms and brought to light.

These are things that the system does, but doesn't know that it is doing.

Obviously, every coherent and definable function and functor within a non-sentient non-self-aware system is an 'unknown/known': but we're talking about people here, and *discoverable* logic.

An interesting speculation:

All societies, organisations and communities have 'unknown/known' embedded logic that maintains their cohesion and (say) protects against in-breeding, manages difficult individuals, mitigates contagion, allows emergency responses...

...And there are also self-destructive patterns of behaviour that haven't yet evolved-out: embedded logic that encodes failure modes, which can be discovered by careful statistical analysis by a hostile outsider's AI algorithmic rules engine.

The formation of hate groups and fact-resistant cliques of cranks is one of these anti-patterns: I do not doubt that there are others - some of them impossible to describe in human language - and I suspect that Cambridge Analytica's successors know a great deal about them.

Plus, of course, there are things we do actually know about - racism, sexism, age discrimination - which are often unseen, but are discoverable and knowable.

Discoverable, that is, for people who want to know.

I am pleased to hear that Amazon is making good use of the AI 'failure' that produced such unwelcome discoveries about the recruitment of women and non-white engineers: other organisations have 'we-don't-want-to-know/knowns' that they pay Facebook to codify in 'smart' algorithmic recruitment campaigns to find candidates that are 'a good fit for our dynamic culture'.

Those algorithmic campaigns are, of course, absolutely and provably not racist by design, nor by intention; likewise, there is no intention nor discoverable algorithmic logic ensuring no-one over 50 ever sees their job adverts.

Legally they're in the clear, except in jurisdictions with a law against *'indirect discrimination'* that measures the discriminatory and prejudicial effect, regardless of the stated intention. And that matters, because the stated intention is often well-documented and legally-watertight, so that the bad mechanism is, in law, unknown and impossible to prove.
.
.
.
.
...And the very simple data practice of statistical analysis is effective against toxic AI, if we're prepared to use it, and if it is observable in large populations and large-ish organisations.

A very smart algorithmic campaign can, of course, attack an 'atomised' population of differing subgroups that are impossible to aggregate for the collation of statistically-significant data.
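For what "the very simple data practice of statistical analysis" can look like in code, here is a minimal disparate-impact check in the spirit of the US "four-fifths rule" used in employment-discrimination screening (the selection numbers below are invented for illustration):

```python
# Disparate-impact screening: compare selection rates between groups;
# a ratio below 0.8 flags the process for scrutiny ("four-fifths rule").
# The numbers are hypothetical outcomes of a CV-screening algorithm.

def selection_rate(selected, applicants):
    return selected / applicants

def disparate_impact(rate_group, rate_reference):
    return rate_group / rate_reference

rate_men = selection_rate(60, 100)
rate_women = selection_rate(30, 100)

ratio = disparate_impact(rate_women, rate_men)
print(round(ratio, 2))   # -> 0.5, well under the 0.8 threshold
flagged = ratio < 0.8
print(flagged)           # -> True: worth investigating
```

No access to the algorithm's internals is needed, only to its aggregate outcomes, which is exactly why the atomised-subgroups attack mentioned below is so corrosive.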


50:

Actually, no - Utopia was not satire, but a revolutionary philosophical tract - it wasn't mocking anyone, which is the definition of satire. Traditionally, both satire and such tracts have often been written in the form of science fiction or fantasy, to avoid attracting the ire of TPTB and the attention of government employees wearing masks.

51:

"The evolutionary algorithm explores all of the behaviour space of the circuit elements, rather than the limited region of safe and predictable behaviour prescribed by the datasheets, ..."

Actually, no, and that myth is right at the heart of why so many people misunderstand evolution, intelligence, AI and computational complexity. Such methods will find only solutions that are connected to the initial conditions in a way that I describe below. If a better solution exists elsewhere, it will be found only by the application of genuine intelligence or an extremely unlikely fluke.

What both evolution and those algorithms do is to take a (often random) set of initial conditions, and search their neighbours for better ones. They will thus behave like water, flow only downhill, and get stuck in the first sink they come to. That problem is partially alleviated by adding various forms of randomised jumping, but the space of possible solutions is too large for that to enable them to search more than a very small proportion of it.
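The water-flowing-downhill behaviour is easy to demonstrate: a greedy descender on a function with two basins stops in the first sink it reaches and never sees the deeper one. (A toy example; the function is invented to have a shallow local minimum near x = 2 and a deeper global one near x = -2.)

```python
# A hill-descender that flows "downhill" from its start and stops in the
# first sink it reaches, exactly as described.

def f(x):
    # Two minima: a shallow local one near x = 2, a deeper global one near x = -2
    return (x * x - 4) ** 2 + x

def descend(x, step=0.01):
    while True:
        left, right = f(x - step), f(x + step)
        if left < f(x) and left <= right:
            x -= step
        elif right < f(x):
            x += step
        else:
            return x  # a sink: no downhill neighbour

# Started near the shallow basin, it never finds the deeper minimum at x ~ -2
x_stuck = descend(3.0)
print(round(x_stuck, 1))  # -> roughly 2.0, the local minimum
```

The randomised-jumping fixes mentioned above amount to calling descend() from many starting points and hoping one lands in the right basin.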

Those techniques were extensively studied in the 1960s, and those problems were known then; nothing has changed. I was a mere student, but dabbled with them then, and have been watching the modern claims with disgust and despair. No, they are NOT new and, no, they do NOT do what the proponents claim.

This has consequences:

The more that we focus on automation, bureaucracy and 'rules', the less potential for radical progress we have. None of those inventions by AI have been more than minor, nor are any likely to be.

We could perfectly well have AIs with any particular properties we want (e.g. explicability, fail-safe etc.), but we have to engineer those in. And, currently, the political will is not there.

52:

However, the skills to perform the analysis are different to those required to build the artefact, and take time, so currently we have lots of opaque gee-whiz boxes which seem mysterious because we don't yet have off the shelf tools to automate the analysis. In contrast, creation is supported by a massive ecosystem.

This is true in biology as well. Humans don't (until recently) design living things. They walk (or slither or blow) into our lives, and we try to figure them out. The way we do that is not very similar to the way they developed through evolution. (Even though we do it using tools that evolution gave us.)

In the case of machine learning, it HAS to be useful that we know in some detail how the machines were built, at least in the sense of knowing the overall structure of the machine, what algorithmic process was used to optimize its parameters, what environment it evolved in (which is to say, what the training set was), and the objective function. That is to say, although it is true that "the skills to perform the analysis are different to those required to build the artefact", there is a useful relationship between them.

53:

I've been musing on the title of this essay, particularly the contrast "Threat or Menace". What distinction is being highlighted here? To me the words mean very nearly the same thing (with some subtle shades of difference). The essay itself uses the word "threat" or its derivatives several times, but "menace" appears nowhere except the title.

54:

Isn't telling whether a school is Montessori or Waldorf best done by counting how many nuts there are in it?

55:

It's a joke, dude.

The subject of talks in the IT Futures conference this year was the role of AI in the future of Edinburgh University, with talks on everything from AI-assisted captioning of courseware for the deaf/non-native speakers, to the use of simulated or remote-controlled chemistry labs in MOOC content delivery, by way of how the law deals with algorithmic processes. Not so much teaching about AI as deploying AI as part of the business process of an institution which turns out to have close to 15,000 staff and 50,000 students this year (wikipedia is out of date).

I was there to give an opening keynote as an icebreaker/talking point, so decided to go full-bore cautionary in case there was a bias towards pointy-haired administrative optimism -- but it turns out most of the speakers were on the same wavelength already.

56:

I haven't had significant contact with that bunch, or any at all in several decades, but it fails to surprise me. Edinburgh was one of the originators of that area, and has a history of keeping its head in the clouds but its feet on the ground. I am sure that it was an extremely interesting meeting - I must check up and see if the talks are published.

57:

I've been musing on the title of this essay, particularly the contrast "Threat or Menace". What distinction is being highlighted here?

It's something of a set phrase/cliche.

https://en.wikipedia.org/wiki/Threat_or_menace

58:

"Threat or Menace". What distinction is being highlighted here?

Specifically it's a reference to the Spiderman comic books. Supporting character J. Jonah Jameson, a tabloid editor, famously kept running editorials of the form SPIDERMAN: Threat or Menace? (Spiderman is a loony who climbs around the outside of buildings in his pajamas, right? It's not as if he can sue. Besides, stories about these super people sell papers.) One might think such sensation mongering would get old after a while or that people would notice that he's repeating the same line of bull for years on end; out in the real world we have the counter-example of how long people have been happily wallowing in myths about Hillary Clinton.

59:

Nile
Very slight correction: ... nor discoverable algorithmic logic ensuring no-one over 50 (read: 35) ever sees their job adverts

EC
Disagree ...
Of course, More changed sides, later, & turned into a murdering persecuting bastard. Only to have the paradigm removed from under him, leading to his own demise.
Called "karma" I believe.

60:

https://en.wikipedia.org/wiki/Threat_or_menace

Aha! Thank you for introducing that meme into my consciousness, where it will undoubtedly reproduce and produce thousands of little incomprehensible meme children, to the consternation of my nephews and nieces.

61:

One talk was a quick history of the AI field. Ed Uni was one of the first three universities in the English-language world to conduct AI research (along with Stanford and I-forget-where-else in the US): it's the leading CS department in the UK, if not Europe, right now.

62:

Ed Uni was one of the first three universities in the English-language world to conduct AI research (along with Stanford and I-forget-where-else in the US) (highlight added).

MIT, I believe.

63:

AI: Threat or Menace?

Or Menacing Threat?

64:

4) unknown/knowns

Explained by a phrase from earlier in American politics: "plausible deniability". Something that you can plausibly claim not to know, so can't be held responsible for consequences deriving from it.

I don't know if it originated in politics or is a transplant from business. A lot of businesses, especially the prominent tech businesses, seem to put a lot of effort and resources into not knowing inconvenient facts.

65:

I don't know if it ["plausible deniability"] originated in politics or is a transplant from business

Once again, Wikipedia(*) comes to the rescue:

https://en.wikipedia.org/wiki/Plausible_deniability

(*) Which cannot be doubted.

66:

MIT, I believe.

Although CMU (Carnegie-Mellon University in lovely Pittsburgh, Pennsylvania) was also a strong contender, with a focus on robotics.

67:

to DMPalmer @30:
But Artificial Neural Nets aren't the only ones that can be led astray by biased training data. In WWII, the Russians trained dogs to run and crouch under tanks. Once they could do that perfectly they were taken to the battle front, in a region where the Germans had tanks but the Russians didn't. The dogs were fitted with explosive vests and released.
This is some sort of urban legend as well: although something like that existed in real life (like the chicken bomb detonator), it was found ineffective rather than not working as intended - you would need to train your dog for at least several months, not several weeks, to achieve reliable results in the heat of battle.

That's the interesting thing about "unknown/known". What the humans think and know is not what the dog thinks and knows. The dog is effectively placed into its own virtual world, in which the food it needs for survival is hidden under the tank ahead of it (probably indicated by some sort of verbal cue from the trainer). It does not know from its experience that the thing now strapped to its back is a live bomb that will blow up the tank and itself. But this is a human survival strategy nevertheless. As a corollary, this sort of relationship may even be a survival strategy for dogs in general - they are here because they are useful to us.

As such, I like to view human society as a large clump of neural networks constantly working on their survival strategies in a complicated world. And the most interesting thing about it is that even though most of this society is immersed in situations where people don't really think about what they are doing and only follow their routine, it still works. It is neither a natural nor an artificial intelligence - it works because people artificially create systems to order each other around while remaining in synergy at the same time. They have protocols of communication, systems of trust and responsibility, and methods to create and control so many things at once. The fact that this "mechanism" achieves such a degree of control with only a handful of modes of communication is simply the most astounding thing about civilization.

I remember reading a not-so-famous series by a Japanese author, quite a long time ago, and being very impressed by it. It is about AI as much as it is about concepts of battle, military intelligence and subversion. It is a duology, and also a five-episode animated series of impressive quality - with many artworks associated with it.
https://www.amazon.com/Yukikaze-Chohei-Kambayashi/dp/1421532557
https://www.amazon.com/Good-Luck-Yukikaze-Chohei-Kambayashi-ebook/dp/B005CXHYHC
Despite the fact that it takes a totally obsolete view of organizations and computer or network architecture, and despite the very inconclusive end of the series at large, it makes some very interesting attempts to explain the logic and thoughts of various characters, some of which are humans, or machines, or the alien-something that is never really explained or described in detail. Especially the part where they try to work out which of them is the enemy of which.

69:

to Paul @15
The question then is, can liberal democracy avoid a legitimation crisis, not of the current leaders, but of the concept itself. When most people view most other people as not merely mistaken, or even deluded, but actually malign, how can anything like democracy survive?

It can't. It simply does not. If liberal democracy is about the rule of liberal democrats, it sooner or later turns into their exclusive playground, refusing to hear from anyone but themselves and testing their legitimacy. If it is about ideology - well, too bad, because it will be challenged all the time by those who disagree with it but cannot be cast aside. If you choose both options, of course, you will get both outcomes - and this is what happens all the time, ironically.

Same thing with dictatorships, btw, because any sufficiently advanced democracy isn't really distinguishable from a dictatorship (as in, say, the Marxist definition of the dictatorship of capital) in the face of a crisis - it only becomes obvious when the system cannot keep up with the changes around it. I think the transition from the Weimar Republic to the Thousand-Year Reich should have demonstrated as much. But oh well, it did not, for so many people.
https://www.dw.com/en/vladimir-putin-condemns-eu-stance-on-nazi-soviet-wwii-pact/a-51636197

Greg Tingey @17:
Exactly the same idea that both the Nazis & the "old" CP ( Like Stalin or Mao ) used ...
We saw how well ( or not ) that worked.
It worked until it stopped working. Some of these worked better than others. Lots of fascist regimes in sheep's clothing existed and still exist to this day, hidden in the shadow of bigger powers. As it turns out, when we talk about systems more complex than a clock or several lines of code, it becomes a question of ideology and personal preferences rather than a question of science and experience.

The Chinese "answer" is a perfect example of a short-term solution which may or may not be modified in the future, and thus may or may not run aground, amok, or into various traps nobody really knows about yet. We can only wonder what the next problem will be, but you can be sure nobody is going to be left behind. May I remind you that the dictators of the 20th century never popped out of nowhere - they were consequences of the destruction caused by much grander tragedies, like the disappearance of an empire or a civil war. People always prefer known fears to the unknown instability of anarchy. People always remember when the great powers and their great promises ended in disappointment, and their idealistic view of democracy was tainted by the crimes this "democracy" committed against them.

70:

We are talking about the 1950s and early 1960s, here, and my understanding is that CMU's robotics work is MUCH later.

https://www.ri.cmu.edu/about/ri-history/

71:

It's not hard to imagine "Zuckerberg sells blackmail information to Putin, society falls" as the way history will eventually be written.

72:

Re: AI hiring tools - multiple-choice (scaled) self-admin test

Don't understand why such corps would bother with using a CV-reading AI when all you need is a detailed job description and requirements presented/filled out as a bunch of multiple-choice rating scales. Each candidate would also be assigned some unique ID. Sort through all of your candidates' application data and only then contact them to request further non-job-related info such as age, gender, ethnicity, etc. A check on job-description bias could be done at this point -- specifically, identify which questions ('requirements') skewed the candidate selection.*

The human interview could be the final step in the hiring process although considering the accelerating rate of employee turnover in the larger corps (the only outfits that could afford this tech) ... meh, why bother: individual employees are interchangeable cogs in the great corporate machine.


* This would also be consistent with how most institutions of higher learning admit their students.

73:

”Those techniques were extensively studied in the 1960s, and those problems were known then; nothing has changed”

Minsky’s critiques from the 60s did not really address back-propagation in multi-layer networks, if they are what you are thinking of.

But it is odd that we are calling GANs “A.I.” These are all highly specialised networks trained for very specific tasks. They are each extremely efficient pattern-matchers for a given type of pattern. They are not in any way general-purpose A.I.

74:

But it is odd that we are calling GANs “A.I.” These are all highly specialised networks trained for very specific tasks. They are each extremely efficient pattern-matchers for a given type of pattern. They are not in any way general-purpose A.I.

You may not have noticed this but there's currently an industry bubble/gold rush in progress, and "AI" is the new "Cyber". (Shudder.)

Back in the 80s it was Expert Systems. In the 90s it was hypertext and search; in the 00s it was social networks.

75:

You may not have noticed this but there's currently an industry bubble/gold rush in progress, and "AI" is the new "Cyber". (Shudder.)

Industry and Academic. I think it was about two years ago that I reached the point of averting my gaze whenever it fell on the phrase "Machine Learning".

76:

You may not have noticed this but there's currently an industry bubble/gold rush in progress, and "AI" is the new "Cyber". (Shudder.)

Does it have nanotubes and use blockchain?

77:

Charlie, the ideas in your talk are calling to mind an SF story from 1997: "Billy's Bunter" by Walter Cuirle. (Published in the U.S. in "Analog", and probably elsewhere.)

Running off of memory, but in the story, a school-age boy carries an AI device from his Dad that he calls a Bunter. It's to listen to Billy's surroundings and give suggestions that help him socialize, since he's had problems and been bullied for it. It's supposed to only know what he knows, but drinks it all in and makes the connections faster. At one point, struggling with a question in class, the device whispers a clue that suddenly inspires a new answer in him, and he raises his hand with confidence.

...And gets in trouble for his answer. You see, he lives in a future U.S. that isn't quite The Handmaid's Tale, but half-religious with hi-tech, and some freedom of belief but strong social pressure. The culture-wars split in US society ended with a split into different kinds of schooling you could choose among. But even in a "Traditional" school (vs the encouraged religious schools), there is strong social pressure to conform to authority (even bullies), with an implied immoral, scary world outside the country. Originality, and certainly creativity, are not encouraged. (The analogy of "satellite dishes in Iran" was, I believe, specifically mentioned in the story. As an aside, the story even predicts the accursed "Smart" TVs, with the father grumbling about bootup & ads just to get to a decent video to play for his son.)

Anyway, in that story, the handheld AI device in Billy's hand was shown as a positive thing, a solution to help Billy create and learn. It ends, not with a revolution, but with the founding of the third legal type of school, a "free" school (any curriculum you want, but none of the accreditation), where all the students including Billy have a Bunter AI to help them. Free in their own little world.

But they're still stuck in that culture. Anyway, that story made an impression on me, since it wasn't a full-on "1984" or "Handmaid's" world, but showed a slow, plausible creep in that direction. So many elements of that story have been coming closer since I read it. And if there was any downside to, say, having Bunter AI technology in the hands of the right-wing rulers, I don't remember it being mentioned.

78:

That's not actually incompatible with the point I was making. Perhaps I should have said "the evolutionary algorithm sets out to explore..." (as in, the starting point is random) "...while the exploration is limited by inherent deficiencies of the algorithm such as the tendency to get stuck in local minima, it does not have any constraints relating to sticking to the region of safe, predictable behaviour characterised in the datasheet". The evolutionary algorithm can - and does - "measure" uncharacterised performance parameters of individual elements on the chip and use the results to influence its larger-scale behaviour. But the representation that we have of the evolved circuit is still just an ordinary circuit diagram (expressed as an FPGA program), which is no longer adequate when you're doing such weird tricks; it misrepresents some of the information and misses a whole lot more out entirely. So these circuits are artificially hard to understand because the circuit diagram does not actually make sense (and similarly, you can't rely on putting the same program into a different FPGA and still having it work the same).

What I'm saying is that that kind of difficulty is not actually the problem here. That problem is solvable, even if it is a huge pain in the arse, by making a horrendously tedious bunch of measurements on the FPGA to acquire the missing data. The problem we have here still exists even when the basic elements of the machine are very well characterised indeed, and is not (currently) solvable because we don't have the basic toolkit to understand huge numbers of interactions (and what we do have tends to just recursively shift the opacity somewhere else without actually getting rid of it).

Your post #14 seems to suggest that we may fundamentally never be able to develop a general toolkit, or at least not unless we come up with a fundamentally different approach. On the other hand "fundamentally impossible in the general case" covers a lot of ground and doesn't mean we can't get results for more specific cases (eg. 10 PRINT "FUCK THE TORIES": END halts but 10 PRINT "FUCK THE TORIES": GOTO 10 does not). So I don't despair of the possibility of keeping a handle on things by developing tools that do work for a useful number of cases and not developing systems that exceed the capabilities of the tools to check them. I just despair of the will to do it, given the current popularity of the approach of going live with some half-arsed piece of crap and hoping people don't notice it is crap or can't complain effectively if they do.

79:

Sorry, I was unclear. I meant I didn't know where the concept of plausible deniability originated.

80:

Multiple independent originators, all of pre-school age (though it takes them a while to get the hang of it).

81:

why bother: individual employees are interchangeable cogs in the great corporate machine.

I've seen evidence that at least some of them work on a "hire, then let them transfer internally", since they know that they are bad at hiring. Sadly that also encourages the development of undesirable ecological niches. Most obviously the "handmaids" and "racially pure" ideological ones, but also intellectual dead zones.

It is funny in a way watching very smart people{tm} struggle with scaling problems. All the problems that they'd like to fix by "go there and look at it, enforce my preferred solution" stop working when you have even 1000 employees. For some reason I'm reminded of wild outposts like Greenpeace New Zealand and The Greens NSW both of which were founded fairly independently of the rest of the organisation and have/had their own ways of doing things. When the borg arrive to encourage consistency some people leave rather than comply.

That sort of internal diversity can be incredibly valuable, but there can be real ethical issues with allowing them to not use best in class solutions. But again, when you have delegated authority how do you get the delegates to enforce the same rules in the same way if they believe the rules are bad rules? Especially when you *want* charismatic leaders even though they're hugely more likely to be arseholes.

82:

Since everything was "cyber" once upon a time, I suppose we'll see "ai" become a prefix, as in aispace instead of cyberspace.

That said, how would you pronounce aispace. A.I. Space, or aiiieeeeespace?

83:

There's a couple of other "fun" problems with training set data.

The first is _deliberate poisoning_ of data. If an adversary can taint the training data then it's possible they can mis-train your AI to their benefit.

The second is _inadvertent poisoning_ of data. We see this in the Cyber Security space a lot; tools get brought in and you train them on your data and they'll be able to detect anomalies. But what if the anomaly is already present (an attacker already has a foothold and is exfiltrating data)? This gets learned as normal behaviour and your new system now has a blind spot.

Curating training data is gonna be a whole new speciality; my prediction for the 5% space :-)
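The inadvertent-poisoning case is easy to demonstrate with even the crudest detector. Here's a minimal sketch in Python (all the traffic figures are invented for illustration): a z-score anomaly detector fitted on genuinely clean history flags a 500 MB/day exfiltration immediately, but the same detector fitted on history that already contains the attacker's traffic absorbs it into the baseline and goes blind to it.

```python
# Toy illustration of "inadvertent poisoning": an anomaly detector trained
# on data that already contains an attack learns the attack as normal.
# All numbers invented for illustration.
import statistics

def fit_baseline(samples):
    """Learn mean and standard deviation from 'known good' history."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, k=3.0):
    """Flag anything more than k standard deviations from the baseline."""
    return abs(value - mean) > k * stdev

normal_traffic = [95, 100, 105, 98, 102, 101, 99, 103, 97, 100]  # MB/day
exfiltration = 500  # attacker moving ~500 MB/day offsite

# Trained on genuinely clean history, the detector flags the attack:
clean_mean, clean_sd = fit_baseline(normal_traffic)
# is_anomalous(exfiltration, clean_mean, clean_sd) -> True

# Trained on history that already includes the attacker's traffic,
# the baseline absorbs it and the same attack scores as normal:
poisoned = normal_traffic + [exfiltration] * 3
poisoned_mean, poisoned_sd = fit_baseline(poisoned)
# is_anomalous(exfiltration, poisoned_mean, poisoned_sd) -> False
```

Which is exactly why curating the training data matters: the whole scheme rests on the "clean" history actually being clean before you fit anything to it.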

84:

In space, nobody can hear you aiiieeeee.


(You're not going to claim you weren't playing for that, are you?)

85:

If you want to see this language in action, "Existence" by David Brin is full of it: "aintelligence", "aissistant", "aivatar" and so on

(and Our Gracious Host gets a quote reference in it :-))

86:

“internal diversity can be incredibly valuable, but there can be real ethical issues with allowing them to not use best in class solutions.”
One reason that diversity is valuable is because the solution the “borg” settled on isn’t necessarily best-in-class.

87:

best in class solutions

Got an adjective, an adjectival prepositional phrase and a noun there in just four words. I've been in analogous situations where involved people had wildly different conceptions of what any of those meant.

88:

Best in class is a corn monoculture.

Optimized for local conditions, inputs, and required outputs might be a vineyard or a vegetable garden.

The problem is that, without putting some referents and scale markers on defining "best," you're likely to get whatever standard operating procedure defines as best. And since bosses tend to simplify so that they can get their heads around the problem, that massively constrains the possibility space.

Actually thinking about it, this is a huge problem with AI: simplifying the system so that the human in charge can understand it well enough to make useful decisions. This is a classic problem with bureaucracies and supply chains. If we leave humans in charge, then no matter how brilliant the AIs are, most of their possibilities are going to be pitched because no one can understand how they work well enough to evaluate them.

Putting the AI in charge means that you understand the model it's using well enough to trust that it has your best interests at heart. Again, this means throwing out any possibility that doesn't make sense...

I guess the only way around this is to bring in AIs as consultants to management. Oh dear.

89:

There is actually very little automated CV reading going on. Having worked on a few such attempts, the sad truth is that the things written on a CV have very little correlation to someone’s suitability for a job, other than the basics of education or prior work experience. And you hardly need a complex AI for that. The content of CVs is mostly useless trash

Which is the flaw in a lot of what Charlie is asserting, you need data to do anything. The main reason Facebook works is not the clever AI but the fact that Zuck has managed to get the user base to provide him with a ton of high quality data, which when combined with external data sources made it easy to target ads to them

So if you want to see where AI is going to be big in the future you need to look at where massive amounts of high-quality data are being collected now. Note the phrase “high quality”: it currently doesn’t apply to things like surveillance cameras. But it does to things like Alexa

90:

@76 - “Does it have nanotubes and use blockchain?”
Hell, my company has it with nanochains and blocktubes. Does the whole Dymaxion CyberNeuroExpert thing really well. Provides ongoing best-in-class optimaxualisation of resource matrix acceptivity parameters for whole-problem visualization and transitivual solutionising.

91:

The first version of the training data story I heard was a network which appeared to have learnt the difference between NATO and Warsaw Pact planes. Until someone realised that the training data (and initial successful test data) came from a book that presented pairs of planes, NATO on the left, Warsaw Pact equivalent on the right. And all the left hand photos had been chosen flying left to right, all the right hand photos flying right to left.
Other versions have tanks in woodland vs tanks on plains, or even colour vs black and white photos.
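The "wrong lesson" in these stories doesn't even need a neural network; a one-rule classifier (a decision stump) fails the same way. A minimal sketch with invented features: direction of flight happens to separate the training photos perfectly, so the learner latches onto it instead of anything about the aircraft, and collapses the moment the directions are flipped.

```python
# A one-rule "decision stump" learner picking whichever single feature
# threshold best fits the training data. Features and numbers invented.

def train_stump(X, y):
    """Return the (accuracy, feature, threshold, sign) rule with the best
    training accuracy over all features and midpoint thresholds."""
    best = None
    for f in range(len(X[0])):
        vals = sorted(set(row[f] for row in X))
        for t in [(a + b) / 2 for a, b in zip(vals, vals[1:])]:
            for sign in (1, -1):
                preds = [1 if sign * (row[f] - t) > 0 else 0 for row in X]
                acc = sum(p == label for p, label in zip(preds, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, f, t, sign)
    return best

def predict(stump, row):
    _, f, t, sign = stump
    return 1 if sign * (row[f] - t) > 0 else 0

# Features: [wingspan_m, direction], direction +1 = flying left-to-right,
# -1 = right-to-left. Label 0 = "NATO", 1 = "Warsaw Pact". In the book,
# every NATO photo flies left-to-right, every Pact photo right-to-left;
# the wingspans overlap between classes.
train_X = [[11, 1], [13, 1], [14, 1], [16, 1],      # NATO
           [12, -1], [14, -1], [15, -1], [17, -1]]  # Warsaw Pact
train_y = [0, 0, 0, 0, 1, 1, 1, 1]

stump = train_stump(train_X, train_y)
# The stump latches onto direction (feature index 1): 100% on training.

# Same aircraft with the directions flipped: it gets every one wrong.
test_X = [[w, -d] for w, d in train_X]
test_acc = sum(predict(stump, row) == label
               for row, label in zip(test_X, train_y)) / len(train_y)
```

The stump scores perfectly on the training book and 0% on the flipped photos, which is the plane/tank story in about thirty lines.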

Another urban legend has the UK Home Office back in the 70s or 80s trying to use a rules based expert system to make immigration decisions. They allegedly gave up when they couldn't make it match past human decisions without an explicitly race based rule, which the human decisions weren't _supposed_ to have used as a factor.

Back to tanks, allegedly one breakthrough in designing fire and forget over the horizon anti-tank missiles was realising that you don't have to recognise tanks in completely 3D arbitrary orientations - if the tracks aren't flat on the ground, it isn't a threat. (Again, algorithmic, not ML or neural network - for Warsaw Pact tanks crossing from East Germany.)

92:

"Best in class" is horribly vulnerable ( like orders of magnitude more vulnerable ) to an OCP than even normal people scrabbling for a solution to said OCP, because people are so used to thinking (excuse me) "inside the box" of the class.
IMHO, this is one of the main reasons ( apart from running out of money ) that the Communist systems collapsed completely.
Thinking outside the class got you the Gulag.
And, now communism is gone, it looks as though the USA is making the exact same mistake, unless they can get rid of Trump & the "R's" next year ....
... Heteromeles on this same subject: ....
Leading to the inevitable effect in so-called "education" of "Teaching to the Test", rather than teaching to the "Real_World"TM

timrowledge
😍
The semi-random bullshit generator turned up trumps, there, didn't it?

More generally, I wonder if part of the problem is the reverse of the Japanese quality-control paradigm.
Your "limits" will probably be set at 3 or 4 SD's in a normal bell-curve.
What happens when a result appears that's 5 SD's out?
And, it WILL happen at some point, & you don't want to ignore it - because there is your Black Swan, so to speak.
See also the early Pterry: "the Dark Side of the Sun"
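For what it's worth, the rarity gap between those limits is enormous under the bell-curve assumption itself, which is exactly why a 5-SD result is better read as a Black Swan (the model being wrong) than as bad luck. A quick check with Python's statistics.NormalDist:

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution

def two_sided_tail(k):
    """Probability of a result more than k standard deviations from the
    mean, if the underlying distribution really is a normal bell curve."""
    return 2 * (1 - z.cdf(k))

for k in (3, 4, 5):
    print(f"{k} SD: about 1 in {1 / two_sided_tail(k):,.0f} observations")
```

Under the Gaussian assumption a 3-SD excursion turns up roughly once in 370 observations, but a 5-SD one only about once in 1.7 million; if you actually see one, suspect the curve rather than the luck.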

93:

So for the foreseeable AI future people should learn to ask themselves the question: 'Do we trust this (depending on the AI level) cockroach/mole-rat/pride of lions/pet to make the right decision for us?'

Is anyone working on AC (Artificial Common sense)?

94:

> unless they can get rid of Trump & the "R's" next year ....

Unfortunately even if they can be voted out of office, they're not going to quietly go away (and the structure of the system means that in Congress the best you can hope for is a comfortable majority for those who appear to have an ideology beyond "we should be in charge because we should be in charge").

95:

I see what you mean, but it's still somewhat mistaken. From an engineering viewpoint, there are several levels of 'possible', such as:

(1) Sign a moderate cheque, and it can be done within a reasonable timescale.
(2) We are pretty sure it can be done, but the bill might be horrendous and the timescale excessive.
(3) It could be done in abstract theory, but almost certainly not within the lifetime and resources of the human species.
(4) It can't be done, even in abstract theory.

Exhaustive searching of the possibilities of something of the scale of a modern CPU chip or software application is either (3) or (4), depending on the question you ask. So the smart thing is not to go there.

"Your post #14 seems to suggest that we may fundamentally never be able to develop a general toolkit, or at least not unless we come up with a fundamentally different approach."

Never. But, as you say, that doesn't mean we can't come up with something usable. The solution is as I described in the last paragraph of #50, which is possible in sense (1) above:

We could perfectly well have AIs with any particular properties we want (e.g. explicability, fail-safe etc.), but we have to engineer those in. And, currently, the political will is not there.

This is horrendously common with 'technological' problems - they are usually soluble, often fairly easily, considered as a classic engineering problem (including biology etc.) - but the chosen political and social constraints mean that they are insoluble.

96:

nanochains and blocktubes

Nanochains are a real thing, of course: polymers such as DNA, RNA, proteins, morpholino oligomers, etc. Once a subject of corporate buzz, but now past their sell-by date, alas.

My brain refuses to provide me with a visual for blocktubes this morning, though.

98:

https://www.google.com/search?q=kidney+stone&hl=en&source=lnms&tbm=isch&sa=X&ved=2ahUKEwiAjMbr1LfmAhU

Indeed, it is easier if you think of block as a verb, rather than a noun. (Didn't occur to me. I just got up and there's still an excess of blood in my caffeine-stream). But these images are not very attractive for inserting "blockchain" into the corporate bullshit memestream.

99:

From the opposite bank of the jargon memestream, there is this report from journalist Molly Ivins, speaking of a Texas politician:

His television advertisements proudly claimed, “He’s tough as bob war”.

Even for someone who lived 20 years in Texas, this takes some decoding. Ivins helpfully adds, "bob war is what you make fences with".

100:

Is there an equivalent of Poe's law for corporate bullshit? If not, there should be one.

101:

Re: ' ... my company has it with nanochains and blocktubes. Does the whole Dymaxion CyberNeuroExpert thing ...'

Gee, sounds familiar. Hmmm ... I think your outfit presented to us last year. (Just pile on some more of that zero-content technobabble whydontcha.)


A question to the folks here that actually work in this area:

How do you test for errors? I don't recall any genpub (non-techie) news stories mentioning this detail. The few times I've sat in on dept-targeted sales pitches, the stock 'answer' was: it's a trade secret (until you sign on the dotted line of some font-4 irrevocable contract, usually several years' worth, that includes an NDA binding your past and future employees' great-great-great-in-laws, etc. - and of course after your last payment check clears).

102:

Thinking about the whole "Threat or Menace" thing, I'd guess that the greatest threat/menace is that some psychopathic, right-wing authoritarian leader of the Altemeyer type* gets it into his head that it's possible to use AI, probably in conjunction with quantum computing to break security codes, to beat a conventional military using cyberwar, excuse me, AIwar. There are two problematic outcomes, and the rather worse one is that the AI attack fails, prompting the target into nuclear retaliation.

The more unpredictable one is if AIwar wins. I think it's possible that this is the way we get serious action on climate change, simply because the biggest users of petroleum (conventional militaries) are shown to be ineffective against the new threat, so there's leverage to drastically downsize the Big Iron Boys. Also, a number of cities undergo drastic population drops after AIs bork their infrastructure controls and cause their power plants to blow themselves up. This will probably lead to some drastic retooling, which might conceivably be more carbon-neutral.

Alas, I suspect that the internet would also be a casualty of an AI war, since cutting the cables is about the only way to stop this from happening again. This would lead to an era of paranoid nationalism, which wouldn't be good for cooperation on climate change. On the other hand, we more-or-less managed the Montreal protocol on CFCs in 1987, while we've so far utterly muffed controlling greenhouse gases in a completely networked world. I don't think the internet is necessarily vital for controlling greenhouse gases, based on this (probably incomplete and faulty) logic.

103:

"My brain refuses to provide me with a visual for blocktubes this morning, though."

Blocktubes are the high-bandwidth buses used for inter-processor communication in computation modules using a virtual super-network of clusters of emulated BBC Micros running on a self-extending fractal anthill architecture. They allow the processors in one sub-cluster to exchange data at high speed with those of another, like the interconnections between transputers. So basically think of a big pipe full of ants.

104:

I reckon that what that would show is that while conventional computer-dependent militaries are ineffective against the new threat, conventional militaries that aren't computer-dependent don't even notice it exists. And one thing about pre-computer military hardware is that you can build shitloads of it very fast. So say the US manages to knock out all the computers in China, including the ones that work the nukes. You then get a very pissed off China building WW2 style kit (petroleum-powered, of course) for all it's worth and able to handle the casualty rates involved in using it, vs. a US struggling to get its head round the new situation and doing its usual freakout over the fact that when you go to war some of your own side are going to get killed. Wherever the actual fighting takes place, the result ends up much like Vietnam except bigger and worse. I don't think you end up with "AI wins" at all, except in the limited and very temporary sense of "I just hit him and he hasn't hit me back yet".

105:

My apologies for being harsh, but I think you missed the threat.

Cyberwar's dangerous because much of our infrastructure is now wired to the web. So, for example, you could use a cyberstrike to take out a city by borking the valves and meters that control the water supply and ensure that it's safe. Ditto with sewer, electricity, just-in-time food deliveries, traffic signals, etc.

If major cities get borked without bombs hitting them, the military has another job: disaster relief. Do this enough times, and the military is tied down keeping most of their country's population from dying, while they try to get all the infrastructure running without it being connected to the internet again.

If you want to add even more misery, set up your AI to dox everyone that you can, erase records, amplify misinformation and discord, and so forth. If no one gets paid, everyone's angry and alienated, and no one can prove who owns what property, it's an ungodly mess. You can do the same thing with a nuke, but a cyberstrike doesn't kill anyone directly, except for maybe people in hospitals whose power gets shut off and whose generators get hacked.

Hopefully you get the picture? This is AI used as a weapon of mass destruction, entirely because we're stupid enough to be moving to an Internet of Things without adequate security. The only solution to stopping it from happening is to break the internet, and I don't think anyone's going to do that as a precautionary measure.

106:

There are a million ways to mess up modern cities, cyber attacks being one of them. Poisoning the water supply and chemical attacks also work pretty well

However the best one is still multiple thermonuclear air bursts

Someone may start with something cutesy like a cyber attack, but it is not likely to end there. There really isn’t much point in leading with something half-assed when the other guy is likely to just escalate to the real deal in response

My guess is most major state actors (like China) would correctly see a cyber attack as just a step along the path to thermonuclear war and would MAD away from it

Lesser states and NGOs might be more likely to pursue such things, though even a lesser state is probably going to end up as a glowing slag pile

107:

The problem with cyberwar is attribution. For example, I don't think an American attack on Beijing would come from whitehouse.com and have "God Bless America" in it. It would probably look like it originated from a server in India. Or something.

Now, who does China shoot at if a cyberstrike comes from India? The US? We can play combinatorial games all day here, but there's no subtext, I chose these simply as examples. I don't think that computer security is fast enough to rapidly (for nuclear war levels of rapidity) determine where a cyberattack came from. That makes them, ironically, rather nasty. If someone uses a cyberattack to shut down the municipal water supplies of a bunch of US cities (for example), what does the US do? Nuke Russia or China? What if it was a private actor, say Geoff Bozo, out to make a point that democracy was dead and aristo-capitalism was the way to run the world? Do you nuke him? Russia? China?

It's a mess. That's what makes it dangerous, possibly to the point of rivaling conventional war in terms of damage caused.

The real problem will be when some disaffected grad student at Edinburgh realizes you can rent a cloud computer long enough to fire off an infrastructure-leveling cyberattack anonymously. Wonder if we'll get to this level?

108:

That's exactly what I, as "nation1", would do to attack "nation2": make my attack use servers and domains that identify themselves as www.nation3.gov.ac or whatever...

109:

You can do the same thing by burying an H-bomb or sailing it into port on a container ship. They tend to figure it out eventually, though. It's pretty hard to keep secrets like that long-term.

110:

a virtual super-network of clusters of emulated BBC Micros running on a self-extending fractal anthill architecture.
My favorite google hit on "fractal anthill" was this segment of an erowid trip report (probably not safe for work so no link but a search will find it) on 4-HO-MiPT (from the account a strong psychedelic):
Looking down at the beach under my feet I can see that every grain of sand is meticulously arranged in natural looking patterns, like the inside of a fractal anthill. Impossible paths the size of hairs look meticulously maintained, and lead arterially to other larger and larger paths that snake across the beach like the sandworms of Dune.
Sounds like a good drug for computer architects preparing to pitch to venture capitalists. :-)

big pipe full of ants.
:-)
The One(s) With The Names once warned me about [threats from?] ants related to this segment from "A Thousand Plateaus", Gilles Deleuze, Felix Guattari (near the beginning; I have yet to finish it):
A rhizome may be broken, shattered at a given spot, but it will start up again on one of its old lines, or on new lines. You can never get rid of ants because they form an animal rhizome that can rebound time and again after most of it has been destroyed.
Real ants never really bother me; at worst I have to gently break/redirect a pheromone trail or three. (Not sure I made that clear to her/them.)
But a pipe full of ants, that might be worrying..
Is there a role for jumping spiders in blocktube architectures? :-)

111:

The real problem will be when some disaffected grad student at Edinburgh realizes you can rent a cloud computer long enough to fire off an infrastructure-leveling cyberattack anonymously. Wonder if we'll get to this level?
Or splurge and rent some distributed services, e.g. (Not vouching for this; haven't evaluated it or the previous article.)
My Journey into the Dark Web: At Your Service (January 9, 2019, Emil Hozan)
Or both.
(Zero-days are not cheap for a grad student who hasn't found them themselves, though.)

112:

Hacking AI employment systems is already a thing: a recent suggestion ("What, you didn't know about...?", suggesting it's been around for a year at least) that all job applications should include on the last page a copy of the advertisement itself in white 4-pt type, so any AI reading it would tick all its boxes.
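A toy sketch, entirely my own construction with made-up keywords, of why the white-type trick works on a naive screener: the scorer sees whatever text it can extract from the document, not what a human can see, so invisible text matches every requirement.

```python
# Toy sketch, entirely hypothetical: a naive keyword screener scores extracted
# text, not visible ink, so white 4-pt type ticks every box invisibly.
def keyword_score(extracted_text: str, required_keywords: list) -> int:
    text = extracted_text.lower()
    return sum(kw.lower() in text for kw in required_keywords)

advert = ["Kubernetes", "Terraform", "agile"]          # hypothetical ad keywords
honest_cv = "Ten years of Linux sysadmin experience."
padded_cv = honest_cv + " kubernetes terraform agile"  # invisible on the page
print(keyword_score(honest_cv, advert), keyword_score(padded_cv, advert))
```

The padded CV scores full marks without a single visible change.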

Threat or Menace:
"More than at any other time in history, mankind faces a crossroads. One path leads to despair and utter hopelessness. The other, to total extinction. Let us pray we have the wisdom to choose correctly."
Woody Allen

113:

I'd use a bunch of prawned servers and botnets within nation2's own borders. Seed it by going there and throwing a suitably-programmed mobile phone in a rubbish bin within range of an open wireless network, bugger off home while it breeds and trigger it off when the need arises.

I'd also expect nation2 to have, or rapidly acquire, a pretty good idea who nation1 was by means that have nothing to do with computer nerds performing a trace on the attack. It might not "stand up in court" or be revealable to anyone without the appropriate clearance, but they'd have an answer a lot quicker than they said they did.

114:

One reason that diversity is valuable is because the solution the “borg” settled on isn’t necessarily best-in-class.

Well no, but there are situations where it's pretty clear to outsiders and the majority of insiders, but there are hotspots of diversity. I suggest that best in class solutions like "rape is bad" and "lynching is frowned upon" are things that, while fiercely contested, should probably be enforced by upper management regardless. As a general rule, even competence-based arguments of the form "you should not exist" shouldn't be allowed. Even for things that are illegal, I prefer that the behaviour be the target rather than the person performing it. Likewise I'm not entirely convinced that we should have diversity of tax eligibility. We have it, but I think it's a problem to be eliminated.

There are many examples of the contrary, though, from traditional diversity (all our managers are white men) to the ubiquity of MS-Windows.

115:

Oh, I'd break the internet repeatedly at random intervals just to teach people not to be such fucking idiots as to do all those thus-disruptable things in the first place, because it's been getting on my wick for years that people don't see the problem with it.

The thing is that while you may indeed be able to do such an attack, how badly and for how long it fucks the target nation up depends greatly on how government in that nation operates, how the people of that culture react, how big a proportion of the total population their military is, and the like. The attacker of course hopes it will be utterly crippling, but should also be aware that they may well just find they've kicked a wasps' nest.

116:

Heteromeles @ 106
Or, maybe just DO NOT USE "internet of things" devices in anything remotely critical.
Like your home "smart" meter, for instance.
I mean, as Pigeon also points out, we've known about this for some time, yet people & orgs are STILL DOING IT ...
W.T.F?


... Oh yes
Pipe full of ants, etc ...
... + .. OUT OF CHEESE ERROR .. + ...

117:

Pigeon: https://appleinsider.com/articles/19/12/13/no-apples-new-mac-pro-isnt-overpriced/

Unfortunately non-computerized weapons are basically ineffective, until you get down to the level of assault rifles.

Consider that during the 1991 Gulf War, roughly 90% of the destruction of Iraqi forces by allied bombers was inflicted by 10% of the weapons -- the GPS and laser-guided bombs, not the old-school WW2-era iron bombs that the USAF and RAF dropped in huge numbers. It turns out that a single 500kg bomb that lands exactly on its target, or within a metre of it (the level of accuracy the RAF now takes for granted), is more effective than a hundred similar-size bombs that are only accurate to within a 50 metre radius. (Which is amazingly precise, compared to WW2 bombing campaigns, where the CEP of an RAF night bomber in 1941 was measured in multiple miles.)
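As a back-of-envelope check on that claim (my own arithmetic, assuming the standard circular-normal error model behind CEP figures, not anything from the talk): CEP relates to the dispersion by CEP = sqrt(2 ln 2) * sigma, and the chance of landing within radius R of the aim point is 1 - exp(-R^2 / (2 sigma^2)).

```python
import math

# Back-of-envelope sketch under an assumed circular-normal error model:
# CEP = sqrt(2 ln 2) * sigma, and the probability of landing within
# radius R of the aim point is P = 1 - exp(-R^2 / (2 sigma^2)).
def hit_probability(cep_m: float, lethal_radius_m: float) -> float:
    sigma = cep_m / math.sqrt(2 * math.log(2))
    return 1 - math.exp(-lethal_radius_m ** 2 / (2 * sigma ** 2))

# 50 m CEP against a 10 m lethal radius: a few percent per bomb.
print(round(hit_probability(50, 10), 3))
# 1 m CEP against the same target: essentially certain.
print(round(hit_probability(1, 10), 3))
```

At a 50 m CEP a 10 m lethal radius is hit under 3% of the time, so you would expect to expend dozens of bombs per target; at a 1 m CEP, one bomb per target.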

Again: you can't kill an enemy if you can't see them, and a lot of the modern electronics is all about sensors and networking, to detect enemies and notify appropriate forces of exactly where to point the howitzers or MLRS to flatten them.

And you can't kill an enemy if you can't engage them, because your supply chain won't reach that far or you've run out of truck tires because your logistics are crap because your quartermasters are relying on bits of paper rather than the just-in-time fulfillment networks that were invented, in the first place, to allow the US military to conduct extended operations at trans-oceanic range.

Industrial age warfare is intimately dependent on logistics which in turn doesn't work properly without communications, which is where the computers come in.

Now, I'll concede that you can do 98% of the job with late 1960s LSI logic based 8-bit or 16-bit minicomputers and so on; and that if GPS goes away you can still take out bridges using laser-guided bombs, and use acoustic couplers to send data over field telephone wires. But if you try to regress to pre-Vietnam War era tech, you're going to lose, and lose hard.

118:

What you are talking about has already started - look at what is being done to Iran. It isn't being reported in western media, but it has been under repeated, serious cyber-attack. And we are almost sure that Stuxnet was a USA/Israel attack that took 'cyber-warfare' from the low-key probes and disruption of 'the enemy' that is SOP in international relations to a higher level of warfare than was used in the Cold War. And, yes, Iran has responded by cutting the Internet, at least once.

119:

Indeed; there are people who still don't believe me about the "Gulf Wars" when I say that:
1) A Paveway was accurate to the extent that you could decide to put it through the 4th window from the West end of the building on the 3rd floor, with a 99% certainty of getting that window.
2) At least one F-15E scored an air to air kill on a helicopter in flight using a Paveway, and the footage was on the news that evening.

120:

A Paveway

JDAM has a laser seeker option that gives it moving-target capability too. And the latest Paveway, the Paveway IV, has GPS/INS added, making it essentially equivalent to LJDAM. My guess is that LJDAM will eventually win out.

Milporn video of LJDAM striking moving truck targets:

https://forum.warthunder.com/index.php?/topic/444228-what-is-the-name-of-that-bomb/

121:

Yes, but only when one is talking about conventional battlegrounds in wars between 'nation states' and similar - if you can change the type of warfare, things are somewhat different. And a great deal of warfare always has been and is not fundamentally of that form; a lot of it is primarily conflict between worldviews.

Let's exclude the various methods for disabling all computers and electronic communications, as the consequences of those have been well-discussed in SF.

For example, you can turn it into (effectively) civil warfare, using guerrilla tactics by well-embedded and disaffected members of the society you are attacking. Our rulers in the UK (mainly Bliar) have already started creating automated weaponry to use against violent (and other) dissidents, but it doesn't work very well even against the few serious ones that we have.

And then one gets onto longer-term warfare, often by converting the new generation or distorting the way that governments are appointed or constrained - which includes the struggles against slavery, for female emancipation, for socialism and (our current ones) for monetarism etc. I know that many people wouldn't call that warfare, but the late unlamented Joseph McCarthy and the leaders of most countries would disagree, as can be seen by their actions. And there, advanced technology is of only marginal help, though 'they' are working on it - as you said in your speech.

One COULD argue that the way Labour was targeted was proof that technology works in such cases, under some circumstances, but it's stretching a point. There was nothing automated (or even really modern) about what was done.

122:

A Paveway with laser seeker from an airframe with a WSO does have a capability to track a moving target; a JDAM with only GPS doesn't.

123:

A Paveway with laser seeker from an airframe with a WSO does have a capability to track a moving target; a JDAM with only GPS doesn't.

Er, yes. But a JDAM with a laser seeker does.

http://www.deagel.com/Defensive-Weapons/GBU-54-Laser-JDAM_a002233001.aspx

http://www.boeing.com/resources/boeingdotcom/defense/weapons-weapons/images/laser_jadam_product_card.pdf

124:

Yes, but only when one is talking about conventional battlegrounds in wars between 'nation states' and similar

That's what Pigeon was talking about, implicitly -- a conflict between the USA and China. My point stands (but does not in any way invalidate your subsequent line of argument: it's an apples/oranges situation).

125:

FWIW, the dangers posed by "Internet of Things" devices are trivial compared to the dangers of the brainworms that Twitter, facebook, Instagram, etc are designed to implant. IMHO.

126:

Consider that during the 1991 Gulf War, roughly 90% of the destruction of Iraqi forces by allied bombers was inflicted by 10% of the weapons -- the GPS and laser-guided bombs, not the old-school WW2-era iron bombs that the USAF and RAF dropped in huge numbers.
Such is the nature of combined arms: the other 90% seem not very effective, but they were necessary to solve a much more conventional problem. That is, if somebody knows exactly how the enemy is going to attack their forces, they can prepare for it with maximum efficiency, which has been demonstrated multiple times in singular cases, and sometimes even at large scale. But if the defender sees just several radar marks that disclose no exact type of ammunition, target, or radar signature, and at best can be recognized as "fast movers", he is forced to take chances and guess whether they even present a clear and immediate danger. And that guessing game is a larger part of warfare than the shooting itself.

If you know with good confidence that the enemy is attacking in the same low-profile stealthy run with gliding guided bombs, you don't need to ask about the method of prevention; it is all in the game theory. In other words, warfare that consists purely of stealthy, singly deployed airspace penetrators with precision weapons and super-trained super-soldiers (and the other idealistic "future wars" concepts of the "liberal" generation) does not work, because they are vulnerable to counter-intelligence and information attacks of similar quality, which are also much cheaper. Of course, a supreme force of supreme capabilities and supreme numbers will always win against a smaller one, but even then the chances are that there will be a lot of struggle to keep everything under wraps.

The modern application of force in Middle East conflicts shows that only extremely costly support operations can produce the extreme sense of security that the US has had in the region; otherwise a normal army would have exchanged a tiny bit of vulnerability for a huge drop in the price of war (I suppose the trade-off is almost inversely proportional at this point, so a 50% reduction in expenses means a 100% increase in casualties). The US could easily win these wars if a) it had any serious intention of doing so instead of just exercising its military powers, and b) it used less obscure methods that involve neutralizing actual problems rather than fighting the symptoms.

Though there's a question: if we increase the intelligence component of warfare to its logical limits, will classical combined ops still be viable? What if you could invent an AI that cracked internal communication protocols so effectively that IFF failed, prompting forces to fire on their own side? That is one such future possibility. It is much like the way civil wars were started under foreign influence by actors a hundred years ago, but oh well, where were we again with that?

127:

Today, perhaps, but tomorrow?

128:

the dangers posed by "Internet of Things" devices are trivial compared to the dangers of the brainworms that Twitter, facebook, Instagram, etc are designed to implant.

Disagree, and would like to note that IoT devices are roughly a decade to a decade and a half newer than Facebook, etc.

What makes FB so toxic is the userbase of roughly 2.5 billion shaved apes. We are nowhere near to 2.5 billion unmaintained/buggy IoT devices yet, but we'll get there in the next few years and it ain't going to be pretty. In particular, a lot of the exploits on IoT kit are going to look like FB/Twitter/social media hacks -- client bots running on rooted internet-connected lightbulbs look like social media attacks but are really IoT hacks, after all.

129:

Here's the thing: Clausewitz was half-right about war being a continuation of politics by other means. The problem with this statement in a modern context is that when we're dealing with power issues, there's actually a continuum of politics that starts with something like consensus at one end and goes to tactical nuclear warfare at the other.

And it's probably multidimensional, not a scale, either. For example, strategic nuclear warfare has primarily been about maintaining a credible threat for deterrence purposes, not actual explosion of nukes. Also in the middle are all the "bright-side" nonviolent conflicts (bright in the sense that often the leaders see no value in keeping anything secret), the "dark-side" nonviolent conflicts (bribery, blackmail, extortion, doxxing, propaganda, all the social media stuff we hate, electioneering, etc.), as well as various forms of guerrilla and economic warfare.

Currently, petroleum-powered conventional warfare is what the US uses to maintain its dominant status. Other powers do similarly. There are serious problems with this, but the biggest is that conventional warfare really is one of the major drivers of climate change. It's not just that the big weapons systems use petroleum, it's that much of military strategy is about controlling systems that extract and move petroleum. And the militaries know this.

Cyberwar to date sits generally in the messy middle, messing with people's heads and politics, and is part of a system known as hybrid warfare that's being practiced by all the major powers. This is probably only going to get worse. One thing to note is that in the US, we only hear about Russia's hybrid-warfare attacks on us. However, we don't hear about what the NSA is doing with its enormous budget. My assumption is that we're all causing trouble for each other, which kinda sucks.

Anyway...

The problem going forward, especially with the civilian side of the internet of things, where security lapses are a normal part of the discourse (read Bruce Schneier), is that cyberwar can start causing infrastructural damage by disrupting or destroying control, logistics, and financial systems. STUXNET was just the start of this.

In a weird way, this is good. Warfare is about disrupting the enemy's will and ability to fight. If this can be done without burning any petroleum, this is a good thing, especially if it also renders conventional, petroleum-burning warfare ineffective. We'll likely get rapid action to curb greenhouse gases once they're irrelevant for war.

Unfortunately on the bad side, this won't be a theoretical war, it will be an actual war, which means that, likely, cities will be destroyed in the conflict and there will be a considerable death toll, not through direct killing from enemy munitions, but through famine, epidemic disease (lack of clean water and sanitation) and the resulting civil unrest. Dealing with the mess at home will tie down military forces as surely as a siege would, especially since the militaries are more likely to be resistant to cyberwar than civilian systems are.*

On the good side, the fortunes of many of the billionaires (including Putin and the Saudis) are tied to petroleum, as well as to social media, so disrupting their systems might conceivably radically change politics.

On the bad side, we'll have little control over how such systems will change, and weak states with limited ability to fight back through democratic processes are the normal hunting grounds for would-be dictators. To the extent that cyberwar can affect the structure of successor political systems, it will do so. However, I'd point out that the US and USSR have been pretty ineffective at using non-cyber warfare and propaganda for durable nation building, so I'm not sure how effective the cyber version would be.

And so it goes.

*Note again, for civilian IoT, I'm not talking about internet-enabled doorbell cameras, but rather the logistics and public health infrastructure that gives us clean water, working sewers, food, medicines, and other necessities of life. Where these systems fail, people will die.

130:

What makes FB so toxic is the userbase of roughly 2.5 billion shaved apes.

Absolutely. And IoT devices are inherently far less dangerous than shaved apes.

131:

Are you sure? I mean most of them aren't (yet) for us, but central heating, lighting, cooker, microwave (treated separately), fridge/freezer, washing machine, real 'pooter, fake 'pooter (aka cell phone), door bell makes 9, so if we say that a couple will have at least one real pooter between them and a fake pooter each, that gives us 5 times as many things per front door as there are people... and I've not considered stuff like transport; I own a car, and have access to 7 works pool vehicles.

132:

Charlie - grant-related uses of AI: where I was working, they built a program that is now stood up for the whole of the US NIH. The system reads grant proposals and comes back with recommendations as to which Institute or Center (there are 27 in the NIH) the proposal should be submitted to. What little I know of it involved looking for phrases, etc., that matched successful grant proposals for a given Institute.

So, it guides the grant seeker.

133:

Do you have any idea how many times, over the years, I wished I knew why the compiler, OS, or application was doing this, or failing on this?

And telling me I can read the code - do you really think that I had time at work to figure out what part of the hundreds of thousands of lines of code I needed to read? Sometimes I could, if it was obvious, and others, not so much.

134:
The Radeon Pro Vega II Duo GPU I'm talking about is obviously much more specialized and doesn't come with the 700Tb disks or 1.6 petabytes of tape backup, but for raw numerical throughput — which is a key requirement in training a neural network

Tensorflow and PyTorch (the two most widely used frameworks for training NNs) use the CUDA libraries for GPU support, which means that GPU acceleration only works on Nvidia GPUs. So from a practical point of view the AMD Radeon is (nearly) useless, despite often being faster from a FLOPS point of view.

There are some not-commonly-used counter-examples, like PlaidML.

Minor nitpick, but annoying to me because it means my next computer won't be a Mac.
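A minimal sketch of the practical upshot (my own toy function, not PyTorch code): the usual framework idiom is `"cuda" if torch.cuda.is_available() else "cpu"`, and since the CUDA backend only targets Nvidia hardware, a fast Radeon reports no CUDA device and the training job silently lands on the CPU.

```python
# Illustrative toy only, not real framework code: CUDA is Nvidia-only, so raw
# FLOPS on another vendor's GPU don't count toward device selection.
def pick_device(cuda_available: bool, gpu_vendor: str) -> str:
    if cuda_available and gpu_vendor == "nvidia":
        return "cuda"
    return "cpu"

print(pick_device(True, "nvidia"))  # Nvidia box: trains on the GPU
print(pick_device(False, "amd"))    # Radeon box: falls back to CPU
```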

135:

English already DOES have a gender-neutral third-person singular: "they"

https://www.merriam-webster.com/words-at-play/singular-nonbinary-they

136:

In They Promised Me The Gun Wasn't Loaded (James Alan Gardner), the superhero Zircon uses "ze" and "zir", which I kind of enjoyed.

137:

India has a nationwide database of all its nationals, yet is now demanding that some folk (Muslims) 'prove' their Indian citizenship, otherwise it's into an internment camp you go. Not the first time gov't red tape has been weaponized, but having super-duper computer systems aka AIs handy means this can be done remotely, instantaneously, and without any messy/expensive physical property damage. The harm done via targeted red tape can be as devastating as a nuke.

138:

Oh that's lovely! Neopronouns like ze/zir have been around a LOT longer than a lot of people think.

I also quite liked that in Ann Leckie's books everyone is "she" regardless of gender or genitalia.

140:

That's a really fun list, thank you.
This immortality scheme made me laugh:
Qbert - cliff: An evolutionary algorithm learns to bait an opponent into following it off a cliff, which gives it enough points for an extra life, which it does forever in an infinite loop.
I didn't see any that involved changing the Rules(/reward/utility function). That's also a valid approach, if one can accomplish it. Especially if the observers don't notice.
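The Qbert exploit is easy to reproduce in miniature. A toy reconstruction with hypothetical point values (not the real game's scoring): each baited death costs one life but earns more points than the extra-life threshold, so every trip off the cliff is net-positive and the loop can run forever.

```python
# Toy model with made-up numbers: dying off the cliff costs a life but banks
# more points than an extra life costs, so the "death" is reward-positive.
def cliff_loop(iterations: int, points_per_cliff: int = 150,
               points_per_extra_life: int = 100) -> tuple:
    score, lives, lives_banked = 0, 3, 0
    for _ in range(iterations):
        lives -= 1                     # follow the opponent off the cliff
        score += points_per_cliff      # ...and collect the points for the kill
        while score // points_per_extra_life > lives_banked:
            lives_banked += 1
            lives += 1                 # extra life at each score threshold
        if lives == 0:
            break                      # game over; never reached at these rates
    return score, lives

print(cliff_loop(10))  # score climbs while lives never run out
```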

141:

“I'd also expect nation2 to have, or rapidly acquire, a pretty good idea who nation1 was by means that have nothing to do with computer nerds performing a trace on the attack. It might not "stand up in court" or be revealable to anyone without the appropriate clearance, but they'd have an answer a lot quicker than they said they did.”
But how long, if ever, until they have something solid enough to make it politically possible to launch ICBMs?

142:

Probably depends on the target. After all, actually invading countries on very thin evidence is demonstrably politically possible.

143:

Well, that depends on their definition of "politically possible" and what their preferred solutions are when the possibility does not match the desire. But I'd take a guess that it would be before they restored the physical possibility.

144:

Just to use the US as an example, the President has sole authority to launch a nuclear strike, on the assumption that nuclear war happens too fast to get the legislative branch involved. The problem with launching a nuke is that it escalates rapidly. The target is likely to massively retaliate so as to not lose their missiles in the tubes, so one ICBM is likely to start a war.

We've been lucky that world leaders tend to prefer the Mexican standoff aspect of nuclear war, with a side order of "invade us and die" in nuclear hell-fire.

I don't think cyberwar will ever touch nukes, aside from messing up GPS systems if the missileers are stupid enough to rely on satellite navigation as opposed to inertial navigation (my bet is that they are not, because it's too easy to mess up GPS, but I suppose they could be stupid enough).

Anyway, how bad would a cyberstrike have to be to force a nuclear retaliation? My guess is that if it's that bad, nuclear retaliation will be extremely difficult. However, this depends on having someone sane and smart at the switch, and oddly enough, there's been a political trend to install useful idiots at these switches. Hopefully that won't backfire on the installers. But you know, it might.

145:

"...your logistics are crap because your quartermasters are relying on bits of paper rather than the just-in-time fulfillment networks that were invented, in the first place, to allow the US military to conduct extended operations at trans-oceanic range."

Well, exactly. If all that lot isn't working any more, the US military's ability to conduct extended operations at trans-oceanic range is fucked. So is everything it does that relies on horrendously expensive ultra-shiny kit with a gigantic maintenance crew and a minimal and coddled complement of actual combat personnel. Having gone further and more enthusiastically down that route than anyone else, they have the hardest task when circumstances force them to go all the way back again. Not to mention non-military factors like much of the industrial capability which they used to tool up for WW2 having been dismantled in favour of using someone else's on the other side of the world.

Even with all systems functional, they have never "picked on someone their own size" in the electronic era. It's one thing to attack a much smaller opponent and run around the place firing off loads of expensive missiles far fewer of which actually hit anything than you claim they do when dick-waving about how much more stuff you've got than some backward (because you've kept it like that) country with a tenth of your population that only has what weapons you've allowed them to have in the course of your long-term interference in its governance. It's a bit different going up against a large and well-equipped opponent with plenty of their own missiles and you have to start worrying about running out.

Embarrassing brain fart to have forgotten the author because I'm sure it's Clarke or Asimov or someone of that stature, but you must have read it - short story where the Americans (although they're not called that) seek victory further and further into the realm of expensive wunderwaffen, and while they are marvellously effective, all the time they spend doing engineering their more-conventionally-armed opponents spend taking over their territory and eating their production capacity until their high-tech methods are no longer able to achieve anything.

(The article seems to be saying "Apple are selling this computer, Gordon Bennett look at the price. Ah! but there's a reason for that! - it's got shit loads of really expensive bits in it. So really, it's only natural that it costs shit loads of money! Can't really complain about that now, can you?" But I wasn't anyway, so other than the general sort of "big expensive computers" theme I don't see the relevance.)

146:

Embarrassing brain fart to have forgotten the author because I'm sure it's Clarke or Asimov or someone of that stature, but you must have read it

Clarke. Superiority. Included in his Expedition to Earth collection, which is one of the ones I am not going to throw out in the current cull.

J Homes

147:

Trying again after my first post disappeared.

Embarrassing brain fart to have forgotten the author because I'm sure it's Clarke or Asimov or someone of that stature, but you must have read it

Clarke. Superiority. Included in his Expedition to Earth collection.

J Homes

148:

Superiority was written by Arthur C. Clarke all the way back in 1951 and was 'at one point required reading for an industrial design course at the Massachusetts Institute of Technology' per Wikipedia. In some ways little has changed since that long ago pre-Dilbert era...

Anyone who wants to re-read it may do so here.

149:

Regrettably, that is not true, and that article shows a lamentable ignorance of the English language. And that quote contains a mistake, anyway - 'themself' should be 'theirselves'.

What they have missed is that English has not just singular and plural, but generic usage, which has slightly different conventions. 'Anyone' is generic, not singular. 'They' to refer to a specific person is a neologism.

150:

One of the principal features of cyberwarfare is that it makes false flag attacks almost trivial, and making them convincing, even to someone suspicious and competent, is no harder than mounting the attack in the first place.

Pigeon is right, in theory, that a victim is likely to have a good idea of who the attacker is, but ONLY if they analyse it rationally, and that is regrettably rare. Just because an attack comes from (even official) servers in a country isn't even serious evidence that country was involved.
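A minimal toy model of the evidentiary problem (entirely mine, made-up domain included): every attribution field in an attack is attacker-controlled, so taking the apparent origin at face value is exactly the false-flag trap.

```python
# Toy model, wholly hypothetical: the "source" an analyst sees is just a
# string the real sender chose to write, and proves nothing by itself.
from dataclasses import dataclass

@dataclass
class Packet:
    payload: str
    apparent_source: str  # whatever the real sender chose to claim

def naive_attribution(packet: Packet) -> str:
    # The rushed-victim analysis: trust the label on the wire.
    return packet.apparent_source

attack = Packet(payload="disable water treatment PLCs",
                apparent_source="gov.nation3.example")
print(naive_attribution(attack))  # points at an uninvolved third party
```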

151:

Oh that's lovely! Neopronouns like ze/zir have been around a LOT longer than a lot of people think.

Yes. Although as gender-neutral pronouns they don't entirely succeed, at least for me. ze/zir has a distinctly feminine feel to an old coot who's been hearing she/her and he/him/his all his life.

But perhaps ambiguous femininity is the effect Zircon is going for!

152:

This reminds me of a seminar I heard recently. The subject, ostensibly, was the use of wavelet transformations for analyzing paintings. Some European museum set up a project to automate the detection of forgeries of Old Masters. They had a painter produce something like a dozen (sanctioned) forgeries, then gave the competitors (mostly teams of academic mathematicians) scans of the forgeries and real originals. The results were all published somewhere.

The seminar I heard was by the leader of one of the winning teams. They found statistical differences that could reliably detect the forgeries. Then, embarrassingly, after they published, they found that many of the most striking anomalies were the result of the scans of the forgeries being of higher quality than the scans of the originals. (You see, the museum had to scan the forgeries for the project, whereas they had archived scans for the originals).

This, BTW, didn't really have anything to do with machine learning. Just good old statistical analysis.
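The confound is easy to reproduce with synthetic numbers (all mine; the real study used wavelet statistics): give the "forgeries" systematically sharper scans, and a trivial threshold on a sharpness-like feature separates the classes almost perfectly without learning anything about the painter's hand.

```python
import random

# Synthetic reconstruction of the scan-quality confound; every number here is
# invented for illustration, not taken from the actual museum project.
random.seed(0)

def sharpness_feature(scan_quality: float) -> float:
    # Stand-in feature: sharper scans retain more high-frequency detail.
    return scan_quality + random.gauss(0, 0.05)

originals = [sharpness_feature(0.3) for _ in range(100)]  # old archive scans
forgeries = [sharpness_feature(0.8) for _ in range(100)]  # fresh hi-res scans

threshold = 0.55
accuracy = (sum(x < threshold for x in originals)
            + sum(x >= threshold for x in forgeries)) / 200
print(accuracy)  # near-perfect "forgery detection" from scan quality alone
```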

153:

I'm pretty sure Clarke wrote Superiority as a comment on the Nazi search for the ultimate wonder weapon -- the timing is about right, certainly, and the Nazis were absolutely bugfuck about superweapons by late 1944 (when it was obvious that nothing else could save them).

I mean, even during a total war where everyone wanted a superweapon (see also: the atom bomb, the B-36 program, Project Habakkuk, and I have no idea what the Soviets were up to but I'd be astonished if Stalin wasn't devoting some minimal resources to secret superweapons projects) the Nazis were the stand-out overachievers at Mad Science Weaponry (I think it turned out that by the end of the war the Reich Post Office alone had about 40-60 air-to-air guided missile programs on the go).

154:

I'd be astonished if Stalin wasn't devoting some minimal resources to secret superweapons projects...

Well, they were certainly receiving and digesting information from Klaus Fuchs, which proved critical in the design of the first Soviet atomic weapons. Richard Rhodes describes Soviet atomic weapons programs in some detail in Dark Sun: the making of the hydrogen bomb. (Yes, even the fission weapons, despite the title.)

155:

Here, try this other *actual dictionary* on for size: https://public.oed.com/blog/a-brief-history-of-singular-they/

Or don't. Regardless, singular they exists, has been in use and increasing popularity for years now, and people who built a language translation engine should probably know about these sorts of things even if you personally don't.

156:

That's interesting! To me it does not feel feminine at all, it feels quite completely outside of male/female to me actually.

157:

'Each man' is ALSO generic. I am not, repeat NOT, denying that it has been used since time immemorial in such contexts. I was taught the difference at school when I did Chaucer's Prologue for O-level (which also contains such a use), and have observed it many, many times in literary contexts since.

If you look at the actual OED, the relevant definition is: "With an antecedent referring to an individual generically or indefinitely ...". The use for a specific person of unconventional gender is a separate definition and the first reference is 2009, and there is a footnote to the first meaning stating that the second one is a 21st century practice.

158:

Fred Pohl's "The Wizards of Pung's Corners" story was definitely about the increasing complexity of weapons but it was based on the US post-war experience rather than the Nazis and their superweapons programs. It was written only a few years after "Superiority".

159:

I'm pretty sure Clarke wrote Superiority as a comment on the Nazi search for the ultimate wonder weapon -- the timing is about right, certainly, and the Nazis were absolutely bugfuck about superweapons by late 1944 (when it was obvious that nothing else could save them).

That's my take on it too.

People forget how many mad science projects they had going, and how little most of them did. The V-1 and V-2, yes, certainly. The V-3 supercannon never got fielded. And there were others like the vortex cannon, designed to destroy Allied aircraft by shooting tornadoes because the engineers were completely deranged, I guess; it sort of worked, in the sense that the house-sized cannon could break wooden target sheds within its effective range of about 150 meters. Other projects were less practical than this...

Why pointy-haired bosses decided the Post Office needed even one missile program I won't try to guess.

160:

Actually, it was more about marketing taking over from engineering - which we have also lived through. The prospect of AIs taking over marketing policy, designing user interfaces, writing the documentation, and handling 'customer' support, is enough to make one throw up. But it's coming ....

161:

You might find these two interesting: they have decidedly different approaches to keeping their worlds' defaults gender neutral: Forward and Kill Six Billion Demons.

162:

Why pointy-haired bosses decided the Post Office needed even one missile program I won't try to guess.

The Nazi bureaucracy was all about empire-building: you know that the largest Panzer division the Reich fielded was owned by the Luftwaffe, because Fat Hermann had to have the biggest and the best?

It's not just Nazi Germany that fell for this disease; the United States has it, too. The Navy's army has its own air force, after all (the USMC air arm: they fly Harriers and F-35s).

And for truly baroque bureaucratic efflorescence it's hard to beat the United States Intelligence Community -- a coordinating group of seventeen intelligence organizations who cooperate and theoretically work together. To quote: "The Washington Post reported in 2010 that there were 1,271 government organizations and 1,931 private companies in 10,000 locations in the United States that were working on counterterrorism, homeland security, and intelligence, and that the intelligence community as a whole includes 854,000 people holding top-secret clearances." If that's the number with top secret clearance, I'm going to stick my neck out and guess that once you count the lower-clearance individuals working for said organizations (e.g. police and sheriffs departments) you end up with more bodies working in that sector than in the entire uniformed armed services.

163:

"The problem with launching a nuke is that it escalates rapidly." :

It can be *much* worse than that. Launch a single missile nuke at North Korea: how does China decide that it is *not* a decapitation strike aimed at Beijing?

For that matter, the Russian decision loop is believed to be less than 15 minutes (because their radar coverage is not what it was). Do you think they will have time enough to decide that this nuke aimed at N Korea is not also meant for Moscow?

Remember also that everything is MIRVed these days. And supersonic airfoils have apparently made serious progress, meaning the separation of targets can get wider and wider.

Given this knowledge, if any ballistic nuke is launched anywhere, from sea or by the big five powers, everyone else shoots back a full salvo, because no one can be sure that they are not the real target, and the decision loop is in *minutes*.

Then the natural consequence is that the first to shoot must go "all-in".

164:

Para 3 - There's at least one 617 Squadron history that claims they destroyed the V-3 battery using Tallboy and/or Grand Slam before it ever fired a round. It notes also that most of the "guest" and enslaved workers were entombed by the raid.

165:

From Wiki:
"The site was finally put out of commission on 6 July 1944, when bombers of RAF Bomber Command's 617 Squadron (the famous "Dambusters") attacked using 5,400-kilogram (11,900 lb) "Tallboy" deep-penetration bombs."
See also this

166:

That's a much more detailed history of the site than I'd previously read, but there's nothing in it that contradicts my previous readings on the subject.

167:

Surely one of the inspirations could have been the Sherman tank versus Tiger tank debate, and also the sad story of the new German submarines that were rushed into production a year or two too early, so lots of resources were wasted on fixing them. Apparently they were quite good and would have been difficult to deal with, but they weren't available in time because production had been fucked up so badly.

168:

Actually, for WW2, we've got multiple examples of homeland defence (England, Japan, USSR, Germany). Everyone improvised like crazy, but England and Japan went for the cheap and dirty (in Japan's case, broomstick bayonets, slam guns, and kamikaze/banzai everything; in the British case, Q-structures, radar, commandos, and various other gizmos that didn't see action).

If I had to guess, what made the Nazi end-game so bizarre wasn't just a notion that technical superiority had won in the past so it would win in the future (cf: Allied radar). Another factor that probably added to the madness was the widespread use of methamphetamine among the Nazis of all levels. I could be wrong, of course, but it seems at least as plausible an explanation as "something about the right-wing German character loves gadgets."

169:

... The Sherman tank vs. Tiger tank debate kind of misses the point: by 1945, even if the Reich hadn't collapsed, the USSR was building up-gunned T34-85s in huge numbers, and the UK had finally worked past its history of bad tank design and come up with the Centurion, which was just entering service in September 1945 (and which, even today, is still in service in some niche roles, e.g. as a turretless APC with the Israeli Army). The Tigers were over-complex both to maintain and to build; the USSR meanwhile had a much simpler but nearly as good design in huge scale production (1200 tanks per month), and the UK had a next-generation design coming on-stream that was a good decade ahead of the Tiger.

Hitler's response? In January 1945, he green-lit production of the Panzer VIII Maus -- a tank so preposterous that ... let's just say, nobody sane ever fielded anything like it.

170:

To quote from the Anime "Girl und Panzer" (in which girls' schools indulge in the subject of Tankery, involving mock battles with actual historical armour) "They've got a mobile wall!?" (a Maus).

171:

I think you badly underestimate the British wartime technology program.

It wasn't just the Chain Home radar system that did it, but the invention of integrated radar-directed air defense on a national scale; also development of the jet engine; a nuclear weapons program that got rolled into the Manhattan project (google "Tube Alloys"); production of strategic bombers on a scale matched only by the USAAF; and a bunch of other stuff. The UK went onto a war footing in September 1939 and didn't come off it until September 1945, and managed to outproduce Germany in terms of tanks, guns, and pretty much everything else.

The methamphetamine abuse was widespread, too: I'm pretty sure a bunch of the British cabinet (and military) were on similar drugs too. (Famously, Anthony Eden -- PM during the Suez Crisis and Churchill's Secretary of State for War during WW2 -- was out of his head on meth during the Suez thing.)

172:

Charlie
The Centurions were live field tested twice - 5 years & 11 years after their introduction.
In Korea they were used as heavy tanks, against which neither the DPRK nor the PRC had anything effective (with a side-order, apparently, of a few Churchills for climbing steep hills!).
And in the disgraceful business of Suez, where they showed that they were still better-than-a-match for the IS-3/T-10 tanks fielded by Egypt - I remember seeing one go past on a transporter & suddenly realising that "That's not one of ours!"

173:

Not to mention the codebreakers at Bletchley Park.

174:

In "The Collected Stories...", Clarke states that the story was inspired by the development of the V2. He further says that the 2 main characters were based on von Braun and Dornberger.

175:

"As Amazon found with their AI recruitment tool - their entire structure was biased against women but not just in the ways they had identified but even more so in ways they hadn't themselves identified."

I'd love to get a link. Would you please send it to bdecicco2001 who is at yahoo dot the com?

Thanks!

176:

While OGH is taking the tack that AI is a potentially serious problem, and many of the commenters are dittoing that viewpoint, I would take the opposite view. Humans and their organizations have been biased and unaccountable for a long time. (Kafka even wrote a story about this.) The problem with human bias is that it is hard to reprogram, unlike a DNN, which can be rebuilt with better data on a time scale orders of magnitude faster than any individual or organization can change. Furthermore, a DNN doesn't try to resist being changed.

It has become de rigueur to criticize self-driving cars for making mistakes humans normally don't make, but the evidence shows that such vehicles are rapidly becoming safer than human drivers (if they are not already so) for most driving conditions. How often do you see human drivers running red lights, as just one example of human frailty, despite their having passed driving tests and the worst offenders being kept off the roads with license suspensions?

DNNs are easily fooled by adversarial inputs that wouldn't fool a human. Yet humans have been fooling people forever, and our defenses are just not that good. We even have a president and a newly elected PM to show you can fool more than half the voters at least some of the time. Everyone must, over their careers, have seen incompetents hired: people fooling the system, or gaming it in the myriad ways humans have learned.

If any social organization with rules and procedures, from governments on down, is a potential AI, then humans as social animals are going to be kept prisoners of such constructs whether humans remain in the loop or are replaced by algorithms. For most of us, there is a conceit that humans are at least able to change adverse decisions. That is true in some cases, but not for the vast majority. As for the changed decisions, they may not even be better, e.g. the recent acquittal of US military personnel accused of war crimes. I for one believe a GOFAI using symbolic logic would do a better job of reaching a good decision in the current impeachment and removal from office of the current POTUS than the Senate will.

The complaint that NNs are opaque, while true today, may not be true tomorrow, as explainability is a hot topic. I expect to see Susan Calvins appearing in the future, albeit using advanced computing tools rather than psychology to determine why decisions were taken. Human decision-making is rarely explainable, as the decline of expert systems proved. Kahneman's experiments emphasize the various problems humans have in making rational decisions; these are really hard to pin down, and, unlike with computers, environmental changes will influence those same decisions.

Kevin Kelly has argued that technology generally has a small net benefit when good and bad outcomes are compared. I think we will see the same with so-called AIs (currently just machine learning).

AI development seems more like the 9% of the foreseeable future. Technology may not even be the unforeseeable 1%, but rather non-linear social responses. After all, it was not that long ago you talked about the beige political future, yet look what has happened since then, and on a global scale.

177:

"... The Sherman tank vs. Tiger tank debate kind of misses the point: by 1945..."

The US had deployed the first Pershings (the successor to the Sherman) by spring of 1945, and upgunned Shermans were already in service (in the movie 'Fury', Brad Pitt's Sherman was an upgunned model which could have killed that Tiger from the front).

178:

To clarify, what I'm thinking of is the Post-Dunkirk stuff that the UK came up with to arm paramilitaries in Britain in case Operation Sealion had actually occurred. Once that threat went away, I absolutely agree with you about all the technical innovations that the Allies collectively cranked out.

What I'm looking at are the weapons of 1940: the silenced .22 commando rifles, the three-in-one cosh/stiletto/garrotes, the Q-installations to distract incoming planes. That kind of thing has parallels with what the Japanese were building in preparation for the US Operation Downfall in 1945, although the defense strategy was rather different. For all I know, the USSR had come up with similar stuff to deal with the Nazi invasion. Certainly the Chinese fielded Big Saber brigades in fighting the Japanese.

This is on a different level from vortex guns and the Maus, and I'd argue that mass produced improvised weaponry is the default when facing invasion. Nazi Germany seems to be an outlier in this regard, and it's interesting precisely because of that.

179:

Para 3 - Even leaving aside that you could not "vote for Bozo" unless you live in "Uxbridge and South Ruislip", the Con Party did not achieve 50%+1 of the votes polled.

Also, memory fails on author and title, but there is one short story which posits opinion polling advancing to the state where polling a single individual is as effective as polling an entire nation.

180:

> ditto the death of the ebook reader

Hands off my ebook reader! No (affordable) smartphone has a 6-inch e-ink screen, offers weeks of service from one battery charge and is free of spyware by virtue of having no communication interfaces besides micro-USB.

181:

By mid-1944 and into 1945, still not all Shermans were Easy 8s, the British Firefly equivalent, or even "Jumbos". Witness "The Bridge at Remagen", where you correctly see Shermans with the original 75mm gun being used in their designed fire support role. (OK, it's a film, but based on a battle that actually happened).

182:

Re: '...short story which posits opinion polling advancing to the state where polling a single individual is as effective as polling an entire nation.'

Are you thinking of Isaac Asimov's 'Franchise'? Not sure about its political 'effectiveness' although it apparently inspired some statisticians to reduce their data sets.

https://en.wikipedia.org/wiki/Franchise_(short_story)

183:

And folks look at me when I rant about the lit'ry establishment, and the media, and 90% of people-in-the-street knowing NOTHING about SF&F.

Since I got into fandom in the mid-sixties (yes, really), we've had New Wave, etc, and sf&f is like AI: if it now does it, then it's not Real [AI|Litrachur]. In the meantime, Real Litrachur has made itself a smaller and smaller market, because NO ONE wants to read their crap.

I was quite pleased a year or two ago, when I heard that some SF&F authors have started referring to that as "lit fic", a genre of its own.

184:

Accountancy - yep.

Actually, let me talk a little about magick and religion and philosophy: I've read that the earliest markings we have were small items - like miniature urns, which were pressed into the outside of a larger vase, one for each actual urn loaded on a ship, and the urn was then fired and sealed. Effectively, double-entry bookkeeping.

Now consider "inner meaning" and outer meaning/seeming. Or Plato - what's on the inside of the urn is the real thing... except that it represents an actual ding-an-sich (the urn with oil or whatever), and what's on the outside of the urn is what we see....

185:

Are you thinking of Isaac Asimov's 'Franchise'?

Not one of Uncle Isaac's best, IMHO.

186:

Cheers - regardless of any "merit" it may or may not have, that's the story I meant.

187:

One small note about schools in the US: there are, effectively, two kinds of "homeschool" - the "Christian" ones, and the others. I had a late friend who was very heavily involved in the latter. There are, in fact, distance learning schools for kids that *are* accredited, as opposed to the bs "Christian" ones.

188:

*sigh*
Every Fucking Website in the world does NOT END IN .com!!!

whitehouse.com is an anti-Trump (at the moment) website.

You're talking about whitehouse.gov. .gov domains were once US federal gov't ONLY.

189:

You wrote:
Consider that during the 1992 Gulf war, roughly 90% of the destruction of Iraqi forces by allied bombers was inflicted by 10% of the weapons -- the GPS and laser-guided bombs, not the old-school WW2-era Iron bombs that the USAF and RAF dropped in huge numbers
---

Sorry, I disagree. The same way all the "smart" munitions were used in the invasion of Iraq in '03. You seem to be forgetting what the media refuse to make note of: when you drop that much into a CITY, the amount of "collateral damage", that is, innocent civilians getting killed and maimed, is HUGE.

190:

Re: 'Human decsion-making is rarely explainable'

Maybe for now. Also think that the definition of 'human decision-making' changes because the consequences of decisions keep changing. Decision-making is not just about the methodology, it's also about the answer/result.

191:

No, I'm making a joke and a point, because I know perfectly well what whitehouse.com is. Misattribution is part of the point of cyberwar, no?

192:

My dad and his buddies as kids watched Sherman M4 Tanks come off the assembly line at the Fisher Body Tank Arsenal in Grand Blanc, Michigan during WWII. They would spy from a perch among the bushes on the other side of the fence as the tanks were test driven through a course of ditches and berms … while his big brother Gerald (uncle) served in the US Army over there. Grand Blanc produced 19,034 tanks, tank destroyers and prime movers from April 1942 to October 1945.

193:

They allegedly gave up when they couldn't make it match past human decisions without an explicitly race based rule, which the human decisions weren't _supposed_ to have used as a factor.

Why was it so important to make it match past decisions? Couldn't they make the case that the rule-based system makes better decisions? I am not saying they were better, just that it would be at least a defensible claim.

194:

Is anyone working on AC (Artificial Common sense)?

Yes: https://en.wikipedia.org/wiki/Cyc

It has been going on since 1984, and still no success

195:

Expanding on Charlie's idea of corporate AI, the belief in the invisible hand of the market is a belief in a gigantic AI-cluster. Which recommendations then apply?

196:

The problem with pulling the trigger on a cyber attack is that shit escalates fast. I doubt the Serbian who popped Archduke Ferdinand thought that was going to start WWI.

If a nation state is going to seriously mess with another nation state, they had better do so through something very low intensity or go straight for the knockout; anything in the middle is the worst choice of all.

With regards to the US and China, I would be really, really surprised if the US doesn't have a first strike nuclear decapitation plan that has a good chance of success. The Chinese nuclear arsenal is pretty sad compared to Russia's, and there are many, many ways to deliver nukes other than flashy ICBM launches, if all you need to do is land five hundred or so, very fast.

Given the choice between taking them off the board quickly versus a conventional war with China? I'm not saying they wouldn't get their hair mussed. But I do say no more than ten to twenty million killed, tops. Uh, depending on the breaks.

197:

#94 - Not that I know of, but Microsoft (game division) have been working on Artificial Stupidity since the mid-1990s (Civilisation series).

198:

That's the beastie, cheers.

199:

I find that particularly amusing because the similarity between the words in Latin has led me to refer privately to mice as walls in English since I was a kid. It also makes me wonder if the anime authors/translators had also noticed this, especially since they do seem to be quite fond of using mangled chunks of Western languages (including Latin) for stylistic effect.

Paul Brickhill's "The Dam Busters" includes a reference to bombing the V3 that matches your description. He also feels the need to begin the story of that raid by telling the reader what the V3 actually was, which leads me to suspect that even that most well-known of nutty Nazi projects did not command the instant recognition from the 50s public that the V1 and V2 had acquired by exploding on them. Indeed it wouldn't surprise me all that much if most Brits who did know what it was, prior to the Iraqi supergun doings and the background references they generated, had first heard of it in connection with 617.

Now that I'm reminded of it I do remember reading that Clarke wrote "Superiority" inspired by the V weapons. But I still suspect (there being no contradiction involved) that it also stands as an element of warning to others not to be tempted down the same road. After all, the US superweapon project had actually worked, the temptation was apparent, and the "Sphere of Annihilation" is an awful lot like a nuke (especially in space).

200:

As if the misogynistic bro culture of SV wasn't bad enough without biased AI.

201:

With regard to the block tube/nanochain products, I must point out that the description was not in fact generated by some random process. I am deeply insulted that any of you could think that.
It was in truth the product of careful work with our Bisynchronous Universal Low-Latency Sequential/Hierarchical Information Transformer Textual Extraction/Report application suite. Available for your use as a serverless FaaS facility containerized for optimal financial performance.

202:

To me it grates. Partly for the same reason that I don't usually like made up alien language names and the like - they are weird, and they are loft hatches, so there's this horrible thing that lurks in sentences to trip you up and doesn't flow like normal words do. But partly also because they give the impression of the author repeatedly jumping up and interjecting, irrelevantly to the actual narrative, "look at me, how right on I'm being, writing all gender neutral an' that!" - not just once but every time they use a bloody pronoun, which gets a bit wearing after a page or two.

That second reason is a consequence of deliberately using a very conspicuous means of achieving gender-neutrality when the English language is so particularly well suited to doing it thoroughly inconspicuously. Singular "they" is only a little bit of what's available. It's entirely possible to write in such a manner that the question of what pronoun to use never even arises, and moreover to do it so smoothly that the reader never notices you're doing it at all; it's not even particularly difficult. When such subtlety is so readily accessible, conspicuously going to the opposite extreme, and doing so in such an ugly manner, pretty much inescapably conveys the impression that the ugliness is being used deliberately to ram home a pet point about use of language, whether that actually is the intention or not.

(I'd also argue that if you are trying to make a point about internalised prejudices, it's far more effective to leave them unchallenged until right at the end of the story and then compel the reader to go back and check the whole thing over word by word to discover that no, there really wasn't any evidence at all one way or the other and the conclusion they'd thought they were so sure of is in fact entirely the result of those prejudices and not something in the book at all.)

203:

Alex Tolley
Actually, returns from our election showed that the tories actually gained very few votes ... Labour simply lost them ( usually )
And, of course "Remain" got considerably more votes than "Leave" - the exact opposite of the supposed result.
Perverse ....

paws
It's a mid-period Asimov short, where the election is named after the single person who answers the multipart questionnaire that determines the election.
Can't remember the name, though.
Ah ... SFR has it, I see.

Unholyguy
I doubt the Serbian who popped Arch Duke Ferdinand thought that was gojng to start WWI
Erm, no.
Before he died in prison, Gavrilo Princip was asked about all the carnage he'd started.
His reply was: "The Germans would have found some other handy excuse, anyway" (or words to that effect).

timrowldege @ 202
Extradoubleplusgood.

204:

Para 1 - That could be the case, but I normally only watch Anime in English dub, and with the camera angle, this line does match the visuals.

Para 2 - I may have been thinking of The Dam Busters, but I do tend to read multiple authors and volumes on a specific subject when they're available, so tend to not say $volume unless I am 100% that I have an actual quote from that source.

For example, in the case of the V-3 raid, aside from TDB (qv), there is at least one other squadron history, and a biography of Leonard Cheshire that I've read.

205:

Before he died in prison, Gavrilo Princip was asked about all the carnage he'd started.
His reply was: "The Germans would have found some other handy excuse, anyway"

In which he was correct, at least about Wilhelm II wanting another chance at re-running 1870-71 (against France). Whether the UK would have been sucked in, under different circumstances (if Germany observed Belgian neutrality) is another matter. Ditto whether Turkey would have become involved, if not for some oddly specific preconditions that via a hop, skip, and a jump resulted in the western allies being unable to reinforce Russia (result: contributed to the collapse of Tsarist government, long-term disastrous consequences, in particular freed up German troops to participate in Operation Michael).

And if it had held off for another few years because the Archduke wasn't assassinated, there's a very good chance that Emperor Franz Josef would have died and been succeeded by Franz Ferdinand, with imponderable consequences for the politics of the Central Powers.

A much funnier alternate history scenario to consider is probably: let us suppose that the Black Hand miss, war does not break out in 1914, and then Kaiser Wilhelm II -- who had a tendency to piss off the neighbours in an almost Trumpian fashion -- is assassinated by a screwball German or Austrian citizen in 1916 (let's call him something unmemorable like, oh, Adolf H: he's fallen in with a bad lot, radical nihilist theosophists or something, and they give him a pistol and a mission).

What happens in Europe now, with Kaiser Wilhelm III (age 34) at the helm of Germany -- still a Prussian militarist, but less narcissistic and unstable than his dad -- and Emperor Franz Ferdinand (age 52: more liberal and internationalist than his predecessor) on the throne of an Austria-Hungary that's still meddling in Bosnia and Serbia?

206:

Charlie @ 206
Brings us back to the other great missed mission.
the chapter in B Tuchman's masterly history "August 1914" headed:
"Goeben, being an enemy then flying"
Where, if the RN had caught Goeben & Breslau, Turkey would have stayed neutral.

207:

I’m not sure about exactly what happens in the what if, but this is part of the background world building for the classic novel series that dieselpunk has needed ever since Porco Rosso.

208:

If that had resulted in any reduction in the major wars, the consequences for the British Empire and the rate and type of scientific development would have been considerable. Indeed, the former might well still be dragging on, losing pieces as it did, though I doubt that it would have made much difference in Ireland, except in detail.

And, as you imply, what would have happened in Russia without WW I is anyone's guess. Or Vicky's Factor VIII, but that's been covered many times before.

209:

I will note that until roughly 1930 there was a perceived risk of a naval war breaking out in the Atlantic between the UK and the USA (with an added land war along the Canadian border).

A first world war started in 1927 over US Customs intercepting a British-flagged rum-running ship, and escalating into a global war with Germany on the UK's side and France allied with the US is ... not totally inconceivable, but totally surreal to our eyes (being a re-run of 1789-1815, rather than 1866-1871).

210:

Yes, very much so, to the first paragraph. That approach is often (usually?) counter-productive. I read Cherryh's Gate of Ivrel (I think), and almost every page grated because of her use of 'thee' in the nominative! I find such distortions make something almost unreadable, and I have difficulty not reacting against the author's political message in response.

The second paragraph isn't quite right. You have to prefer the passive voice (and use similar usages), which reduces the directness. It definitely interferes with writing action prose.

And, yes, indeed, to the third paragraph.

211:

Indeed. I had forgotten about that! Given that it was both in the government's and people's perception, it's not at all implausible.

But, even without such things, without WW I the aristocracy would not have been almost eliminated, and without WW II Britain's bankruptcy would not have been completed. I can't see that the empire would have been disbanded over that timescale as a matter of policy, though it would almost certainly have started to break up.

212:

Naval war primarily meaning all-big-gun battleships. I forget whether the limitation treaties started before or after WWI. The interesting thing then would be whether 1927 is late enough in the development of aircraft and carriers, that the air-power-beats-battleships lesson from Singapore and later in our timeline would apply at some point.

Other themes that want treatment: Austro-Hungarian Navy, what side is Russia on, what side is Japan on (and what’s happening in China), are we talking Zeppelin strategic bombers or what, etc, etc.

213:

It all depends how it falls out. I can't see a UK/USA war in the late 1920s as anything other than a disaster for the British Empire -- likely to lose Ontario (and possibly Quebec), massive drain on resources, the USA was catching up/overtaking the UK in industrial output by then so able to build more battleships faster, and the potential for making mischief in the Empire (e.g. by shipping guns to Indian malcontents, or raising hell around the Pacific -- think Singapore, a short hop away from the Philippines) was ever-present. Meanwhile, there's no plausible outcome whereby the British Empire actually conquers the United States: it's just too big, too populous, and too ornery -- the best they could hope for would be a conditional victory or armistice, which would just leave a pissed-off America eager for round two a decade down the line.

Meanwhile, the British failure to decisively defeat a naval rival would have a galvanizing effect on basically every other maritime power on the planet. (From 1805 onwards, British naval supremacy was pretty much unquestioned, until it gradually transferred over to US naval hegemony from 1945 to 1956.)

214:

Lest we forget, everyone saw the utility of aviation (starting with airships, then fixed-wing and floatplanes) for fleet observation right from the start.

The USN first experimented with launching aircraft from a ship in 1910, the British commissioned their first flat-topped carrier in 1918 -- and the Japanese launched the first ship-based seaplane raid on enemy warships in September 1914!

215:

Utility is one thing, but in our timeline no-one sank any capital ships by air attack at sea until Japanese torpedo bombers did for HMS Prince of Wales and HMS Repulse in 1941. One would imagine in this 1927 scenario something like that occurring earlier, and in or around the Atlantic.

The empire angle is interesting, but mostly because it has a whole forest of what ifs of its own...

216:

Oh, agreed. I was thinking as much of the politico-social aspects, where the two world wars led Britain into a form of capitalist socialism (*), Labour being as much a party of government as the Conservatives, and the consequent changes in governance and power bases. I have not a clue what would have happened if the aristocracy had not been largely killed off and the bankruptcy had come earlier (i.e. in a war with the USA), though the old order would definitely not have survived. But I don't think that we would have seen anything similar to Attlee's reforms.

(*) Sadly, no more.

217:

they give the impression of the author repeatedly jumping up and interjecting, irrelevantly to the actual narrative, "look at me, how right on I'm being, writing all gender neutral an' that!" - not just once but every time they use a bloody pronoun, which gets a bit wearing after a page or two.

For me, Gardner's use of ze/zir in They Promised Me the Gun Wasn't Loaded didn't have these problems. Kimmy/Zircon is a minor character in this book, so zir particular pronouns are just not that prominent in the book. It was not "every time they use a bloody pronoun". Most of the pronouns in the book are the ones Jane Austen used.

Also, it didn't have the feel of Gardner engaging in virtue signaling (not to me, at least). Rather, it was about delineating the character Zircon and zir friends. I never had the feeling of the author breaking the wall to lecture me about gender neutrality. Rather I had the feeling that Zircon was a sort of slightly overbearing but still likable person who would ask zir friends to do this, and they were kind enough and loved zir enough to go along with it.

As I said earlier, I enjoyed it. You, Pigeon, apparently, did not. Así es. (Now, there's a really annoying yet widespread authorial practice: peppering one's writing with unnecessary foreign language phrases.)

218:

Keynes was at Versailles at the end of WWI. No WWI, but a war starting in 1927, suggests a radically different economic situation in 1929 through to the early 30s.

Would an art-deco 20s war kill off enough aristos to force a change of guard? Can we find some other way for them to die horribly?

I think that without a significant Far East presence, there's a whole "are we the baddies" dynamic to consider, given the USA of the time is definitely the more liberal democracy v the UK and Commonwealth of the time. I'd expect Australia to follow Britain, but with a much larger than usual popular sympathy for the USA. We basically flipped over after WWII as a direct outcome of the exigencies, etc. Not 100% clear whether that might not happen during a war as described.

219:

The Washington Treaty (also relevant to Charlie's comments) was signed in 1922.

220:

... then Kaiser Wilhelm II -- who had a tendency to piss off the neighbours in an almost Trumpian fashion -- is assassinated by a screwball German or Austrian citizen in 1916 ...

Things would have to go worse for Germany if he wasn't. As you say, he was a proto-Trump and didn't get along with anyone any better than The Donald does. I've repeatedly suggested the essay here: https://www.newyorker.com/culture/culture-desk/what-happens-when-a-bad-tempered-distractible-doofus-runs-an-empire

221:

Note the UK's ongoing constitutional crises from 1882 onwards (Irish Home Rule), roughly 1900 onwards (women's suffrage), the Labour movement and social security, the constitutional crisis of 1911-ish (which neutered the House of Lords), and so on. While policy wrt. holding down the Empire remained more or less constant, domestic policy was all over the map, and party representation was as mind-buggeringly complicated as it is today (that is: it's not a simple two party duopoly any more, there's a strong risk of one or both of Labour or the Tories imploding catastrophically within the next 5-10 years, a member state seceding from the union, and so on).

Meanwhile, relatively recent research suggests that when high inheritance taxes came in in the UK for the first time, the nobs simply hid their wealth behind trusts and shell companies -- the richer they were, the more efficiently they hid it -- and rather than 80% of their wealth going to the state over a 50 year period, it was more like 30% (but 80% of the visible assets).

And finally, the aristocracy traditionally planned for their families to have "an heir and a spare" -- and it was usually the spare who got packed off to the western front, unless the heir had already bred. The carnage of WW1 left a lot of holes in family trees, but it was more brutal than any other war fought by the UK after 1649: an Outside Context problem for an entire ruling class.

222:

It is unclear that replacing the aristocracy by the plutocracy and demagoguery has improved anything for the hoi polloi. The aristocracy of Britain in 1900 was more civilised than it is often given credit for.

given the USA of the time is definitely the more liberal democracy v the UK and Commonwealth of the time

It's debatable, and both native Americans and coloured people might disagree.

223:

W.r.t. the last paragraph, yes. But it wasn't just that WWI often killed the heir and the spare; it often killed the title holder too. What it did was destroy the landed gentry as a major power base.

224:

Damian @213:

Other themes that want treatment: Austro-Hungarian Navy

John Biggins's A Sailor of Austria is a good treatment of the Habsburg Navy in the OTL WWI. It's an excellent book, full of the horror and absurdity of war.

Biggins wrote four books in that series, published to glowing reviews but declining sales. The third book in particular (the first that I read) was on remainder tables at constantly falling prices for most of a year. The failure of the series was a real shame, IMO.

Charlie @222:

Note the UK's ongoing constitutional crises from 1882 onwards (Irish Home Rule)

Not quite the last question the Irish had to think up (as per 1066 And All That), but it (at least eventually) got what's now the Republic out of the Empire. The English Question of 2016-2020(ish) will probably lead to a Republic of Ireland that includes NI.

Elderly Cynic @223:

It is unclear that replacing the aristocracy by the plutocracy and demagoguery has improved anything for the hoi polloi. The aristocracy of Britain in 1900 was more civilised than it is often given credit for.

The hoi polloi's condition didn't really improve until post-1945. See Orwell's The Road to Wigan Pier and Down and Out in Paris and London for just how bad things were for the moved and shaken. If you didn't have money, TPTB didn't give a rat's arse about you. The more things change....

225:

...the British wartime technology program.

An interesting read on this subject is Chance & Design: Reminiscences of Science in Peace and War, the autobiography of neuroscientist Alan Hodgkin.

When the war broke out, Hodgkin was just beginning the squid giant axon work that would later make him famous. He broke that work off and left his post at Cambridge to join the war effort. It was a great blow at the time, because KC Cole at the Woods Hole Marine Biological Laboratory had begun making progress recording from the giant axon, and it seemed he was throwing away his chance to be the first to make great discoveries.

Because of his expertise with electronics, he was put to work on radar. He designed an airborne radar system that could be carried on a fighter plane and used for battle at night. (This system is described in loving technical detail in the autobiography. It was mostly ingenious electromechanical engineering.)

KC Cole, for some reason, made no progress during the war years, so when Hodgkin returned to Cambridge in 1945 the problem of how nervous system electricity works was still wide open. He was soon rejoined by his former student Andrew Huxley (who had also gone off to the war), and they took up the giant axon work again. They developed the voltage clamp (the papers show the circuit diagrams of their voltage clamp amps, which were based on tubes/valves). The work they then did resulted in several beautiful papers that elucidated the ionic basis of neural electricity. These papers are still read by neuroscience students today, and are a joy to those of us who like to believe that scientific papers don't HAVE to be badly written.

226:

That was my main point. The change improved nothing until Attlee's reforms.

But that was not universally true for the dependents (including employees etc.) of the old aristocracy in (say) 1900, almost all of whom were rural. See Kipling for what the better ones were like, and there were more like that in the country than is now admitted.

Simplifying, what the change from the aristocracy and landed gentry to the plutocracy did was to replace the landholding of such resident gentry and yeoman farmers by non-resident landowners, who cared little for either the land or the people who lived there except how much money they could get from it. I saw the effects of the post-1945 punitive inheritance taxes on such people, and it was tragic, especially for the ecology.

227:

EC @ 223
Unusually I am in 150% agreement with that!
.... but @ 227
Not quite.
Some serious social reforms were instituted during WWI, by Lloyd George & others, to stop open exploitation &, of course, women's suffrage immediately it was over ....
Agree about the latter ...
A single aristo, or family, is subject to "peer" pressure & social ostracism ... but a faceless AI of a corporation?
Much harder to move, as we have been discussing.

228:

That also ignores various European potato famines, and the (Scottish) Highland clearances (1800s CE), and that's just the tip of reasons why the analysis is weak.

229:

In the U.S. it's called The Guns of August, and it's one of my favorite history books.

230:

Eh? What on earth do they have to do with what happened from 1900 onwards? Obviously, I am not claiming everything was sweetness and light by then, but blaming a group for the sins of its ancestors is NOT civilised!

To Greg Tingey (#228): you are right, and I should have said that there were improvements in the period 1900-1944, even if not on Attlee's scale.

231:

As I've said before, what my late wife and I decided we wanted, back in the early/mid-nineties, was an artificial stupid. "I know what to do with this, I know what to do with that, um, hey, boss, I don't know what to do with this - what do you want?"

As opposed to the "smart" everything, that's *sure* that they know what you want, or, esp if it's from M$, they know better than you.

232:

And Russia will just stand by, I'm sure. And China has none in shipping containers on ships *now*, Riiight.

No one "wins" a nuclear war. We all die.

233:

ROTFLMAO! Love it.

234:

Sorry, we disagree. On the one hand, "they", based on the amount of German in English, and on the Deutsche Sie and sie, has been my entry in the non-gendered third-person singular sweepstakes since at least the early eighties. (Since I see that Webster has accepted it as that this year, I won, thank you.)

On the other hand, I also liked it because I *really* dislike stupid neologisms like xe or je.

On the other, other hand, sometimes coming up with the right word, when you're writing, is *hard*. I was just asking on a mailing list about a word to use when I'm dealing with two interstellar governments. Also have spent hours trying to verbalize how someone from the future (+150 yrs) would talk about connecting to someone through something 150+ years more advanced than idiot, er, "smart" phones and the 'Net. AND I want to do something that doesn't sound stupid, made up, and won't look stupid 10 years from now.

Say, like "clicky" in the old Buck Rogers' strips.

235:

I'm of two minds.

Personally I don't mind they in common use, so long as its meaning is properly understood.

In writing, I've used fae and ona.

Here's the scoop. Fae was my rather obtuse take on a society's homebrewed gender descriptions. They were an offshoot of the radical fae, and my conceit was that some of their kids really wanted traditional gender roles. So the fundamental split in the society was whether you accepted traditional WEIRD gender roles, in which case you were he or she, or you did not, in which case you were fae. Honorifics include Miss, Mister, or Fair. I'd even go so far as to posit that a woman or man could have a fairy partner of any sex, or two (or more) fairies could marry. I don't expect it would ever catch on anywhere outside a story, but I think it's worth recycling when I resurrect that world someday.

It was a way to express how a group controlled their identity in profound ways, rather than conform to outside preconceptions. This, incidentally, was a primitive society on an alien world, so marriage wasn't just about love; it was about divvying up the necessary chores for living and raising a family. In such circumstances, there are good arguments both for and against having traditional roles for people to train their lives around, and for divvying up tasks based on talent, so long as partners' talents complemented each other and nothing essential got left undone. This group tried to have it both ways, while actually privileging the latter where possible.

Ona is something I've swiped from toki pona, a conlang you can look up online. Toki pona was designed as a minimalist language, so it has first person (mi), second person (sina), and third person (ona), but no gender or number, so ona can be translated as he, she, it, or they depending on context. It's perfectly suited to what I'm writing, but it gets a bit weird to use. This is another case where I don't expect it to catch on outside a story, but it's fun inside.

Why am I flexible about this stuff when I'm not a gender activist? Well, when you study plants and fungi for any length of time, notions about what constitutes normal gender patterns get a little flexible. After all, plants default to bisexual individuals, and fungi range from asexual/cryptosexual (extremely common) to having over 23,000 mating types (Schizophyllum commune, also extremely common).

236:

Out of the blue my news feed dropped a pretty interesting story from the IT world about the modern state of AI and machine learning, namely the issues with current neural network science. The original article is in Russian, but I googled enough to find similar publications elsewhere about the same problem. It's not all that good. The singularity might have to wait a bit, actually.
https://habr.com/ru/post/480348/
https://science.sciencemag.org/content/359/6377/725
http://new-savanna.blogspot.com/2019/09/houston-we-have-problem-reproducibility.html
https://www.sciencemag.org/news/2018/05/ai-researchers-allege-machine-learning-alchemy

Long story short, a lot of recent science is affected by the same malady: it pushes people to get results and discover something first, while not really caring about the actual potential of that science. I'd guess this is not a new problem, but in an age where you can submit results directly, as often as you need, the competitive factor is overcoming reason and purpose.
https://en.wikipedia.org/wiki/Publish_or_perish
I'm not really into science and wasn't as bold and single-minded about it, but even I understand the crucial point of this moment - if the science runs into a stalemate despite widespread public interest, the entire "revolution" is again under threat of not delivering. Which means that the billions of investment spearheading into that area will produce nothing but trash. (The likes of 3lon M4sk would be a good example, if you have any second guesses.)

What is interesting here is that the quality of reporting on neural networks has fallen through the floor, so much so that people are reluctant to explain and demonstrate how their code works, let alone what it does, to the point of not including it in their submissions at all. It goes like "this is code, it does stuff, give money pls" - with no concern about it doing anything useful outside the bounds of the experiment. Long story short, the author says that only about 17-20% of the works include any code at all.

Even more interesting, this is supposed to be Computer Science, but it now operates in the same territory as natural sciences like psychology and medicine. When was the last time we really had a revolutionary breakthrough in this area? Anyway, I think we can fairly agree that the "AIs" we have are as much black boxes as our own minds, and without stern guidelines they will not be "intelligent" at all. If you think about it, it goes even further. Remember all those statements above about AI systems screwing up along socio-economic biases? Now, how are you going to prove and reproduce that? Who can guarantee that some of these messages are not just another layer of fake news, designed to discourage people from researching real science and instead shoehorn them into doing someone's bidding? Transparency is a larger issue in the information society than one might realize, and it's only going to grow, especially while so many people think they can grasp it intuitively.

237:

Nothing could be less surprising.

238:

I see; I was thinking in general terms of how I have seen such coinages used, which is pretty much "all or nothing". Sparse usage as a part of the delineation of a specific and minor character, such as you describe, is a distinctly different technique and one I would probably find rather less aversive.

239:

Paul @ 15: The question then is, can liberal democracy avoid a legitimation crisis, not of the current leaders, but of the concept itself. When most people view most other people as not merely mistaken, or even deluded, but actually malign, how can anything like democracy survive?

What do you do when the other side actually is malign? How's the old saying go? You're not paranoid if they really are out to get you.

240:

LAvery @ 52: I've been musing on the title of this essay, particularly the contrast "Threat or Menace". What distinction is being highlighted here? To me the words mean very nearly the same thing (with some subtle shades of difference). The essay itself uses the word "threat" or its derivatives several times, but "menace" appears nowhere except the title.

Wouldn't "threat" be danger some time in the future, while "menace" is danger right here and now?

241:

timrowledge @ 91:

@76 - “Does it have nanotubes and use blockchain?”

Hell, my company has it with nanochains and blocktubes. Does the whole Dymaxion CyberNeuroExpert thing really well. Provides ongoing best-in-class optimaxualisation of resource matrix acceptivity parameters for whole-problem visualization and transitivual solutionising.

Yeah, but what the hell does that mean if I need a thousand #2 Left-handed Widgets & they got to be delivered by next Thursday?

242:

Wouldn't "threat" be danger some time in the future, while "menace" is danger right here and now?

If you google "What is the difference between a threat and a menace?", you'll find at least half a dozen different answers. None of them is convincing. In fact, you'll find pairs in which answer one says, "Threat means A, menace means B", and answer two says, "Threat means B, menace means A".

Clearly the correct interpretations of Charlie's title were given by Allen Thompson (@56) and Scott Sanford (@57).

243:

I'm not sure we do disagree; it looks to me as if we're in fairly close accord. I was not decrying singular "they" - I've been using it naturally myself for as long as I can remember. I'm saying that one thing that makes those neologisms so hideous is the gross unsubtlety and conspicuousness of their use in a language which is already notable for being unusually well provided with natural and mellifluous means of achieving the same end (singular "they" being merely the easiest and simplest of many).

Trying to ensure that words by which characters with future technology refer to establishing communication are proof against sounding stupid and made up is a similar class of problem, in that a frequent cause of failure is the chosen coinage having the effect of overemphasising unimportant factors and stealing the scene from the aspects that do matter. Almost certainly the important point will be that a conversation is taking place between the president and the madame, not that they're doing it using the same kind of comms gear everyone else uses. To insist on saying "the president twotincansandapieceofstringed the madame" may be more technically accurate but it also calls a distractingly excessive amount of attention to the scenery instead of the action. As you say people do occasionally manage to come up with a word that works, but most such words don't really work all that well, and I reckon the ones that do are the product more of luck than of ingenuity.

I'd probably decide to solve the problem by avoiding it :) After all, "call" has worked for every development of the technology since plain yelling, and it seems reasonable that it should go on doing so. Or I might use some existing slang term which is known more widely than it's used, and which either makes no reference to the actual technology or is already grossly inaccurate but no-one cares, like "to get someone on the blower".

244:

The basic advantage of democracy is that it allows a peaceful transition of power. Authoritarian regimes, whether aristocratic, plutocratic, oligarchic, or autocratic, tend to have violent power transitions.

In other words, in a democracy, a president can be impeached without fearing that his spouse, children, and relatives will be put up against the wall in front of a firing squad. With a dictator, said firing squad is a common way to end things. Similarly, faction fights in aristocratic regimes tend to be bloody.

Today's super-rich and would-be president-kings seem to have forgotten this. I'm not sure they want to live in a world where, for the US, it's a rerun of the civil war every time someone gets too weak. They think they do, but most of the people living with that fantasy have only played with guns. They've never been under fire in return.

Otherwise, any system can be corrupted, and authoritarian systems seem to get corrupted even faster than democracies do. Spreading out the power seems to slow corruption, and that's the best argument for a liberal system that I know at this point.

245:

any system can be corrupted

I'm currently reading Lying for Money by Dan Davies, about fraud, and it's interesting how you can get a situation where illegal and fraudulent acts are committed because the system has been changed to incentivize that, and yet the changes are not made with that intent. I.e. you can have a system that encourages criminal behaviour without there being a criminal mastermind. (The example he uses is the PPI scandal.)

Criminality as an emergent property looks likely to continue unless we can, somehow, hold management to the same standards that engineers and doctors are held to in terms of the consequences of their decisions. (E.g. if you set poorly-paid and -trained sales staff nearly impossible goals for upselling, you are responsible for the inevitable fraud and other chicanery.)

246:

> what my late wife and I decided we wanted, back in the early/mid-nineties, was an artificial stupid. "I know what to do with this, I know what to do with that, um, hey, boss, I don't know what to do with this - what do you want?"

The reason that computer assistants don't already do that isn't (usually) because no one thought of it, or because the programmers are arrogant enough to think that their programs are perfect. It's because they literally can't tell which problems they "know what to do with" and which they don't.

Computers are suffering from the Dunning-Kruger effect: they don't have the meta-knowledge to understand where they perform well and where they don't, and therefore they don't know when it's appropriate to ask for help.

The so-called AIs are not smart enough to act like your "artificial stupid".

(Of course, you can turn the slider all the way towards caution and have them ask about everything--but then you just get the "dumb" software you've been using for decades. So if that's what you want, you've already got it.)
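Concretely, the "caution slider" is just an abstention threshold on a confidence score. A toy sketch of the pattern (all rules, labels, and numbers here are invented for illustration, not any real assistant's behaviour):

```python
# A minimal sketch of the "artificial stupid" pattern: act autonomously
# only when confident, otherwise explicitly defer to the human.
# The classifier and its confidence numbers are made up for illustration.

def classify(subject: str) -> tuple:
    """Return a (label, confidence) guess for an email subject line."""
    s = subject.lower()
    if "invoice" in s:
        return ("accounting", 0.95)
    if "meeting" in s:
        return ("calendar", 0.90)
    # No rule matched: the honest answer is a low-confidence shrug.
    return ("unknown", 0.20)

def artificial_stupid(subject: str, threshold: float = 0.8) -> str:
    """File the item if confidence clears the threshold; else ask the boss."""
    label, confidence = classify(subject)
    if confidence >= threshold:
        return "filed under " + label
    return "hey, boss, I don't know what to do with this - what do you want?"
```

The catch, as above, is that with a real learned model the confidence scores themselves are often miscalibrated, so the model "doesn't know what it doesn't know": the threshold is easy, the trustworthy confidence estimate is the hard part.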

247:

"given the USA of the time is definitely the more liberal democracy v the UK and Commonwealth of the time"

Definitely not.

Where I lived in the USA wasn't even in the South - it was Indiana. But the Ku Klux Klan ran Indianapolis in the 1920s - their candidate for mayor always won (and appointed the police), their candidates as judges always won, etc. They were the police, the judges, the prosecutors.

It Was Nasty.

Nor is this just a democratic "tyranny of the majority" - Southern States that were majority black somehow managed to never have a black governor, or senator, or judges, or etc.

248:

Well, your statement is trivially not true, given that the US has already won one nuclear war and we didn't in fact all die.

No one wins a nuclear war between the US and Russia, that I'll grant you. But a China that actually only has 300-odd warheads and a really odd philosophy around nuclear deterrence? That I'm not so sure about anymore. Assuming they're not lying, of course, but it'd be an odd sort of lie that encourages rather than discourages someone else from nuking you.

The unthinkable only stays unthinkable up until someone does it, and I don't doubt the US is capable of such an act.

249:

"Computers are suffering from the Dunning-Kruger effect: they don't have the meta-knowledge to understand where they perform well and where they don't..."


That's really well put. I hadn't thought of the connection between Frame Problem type issues and Dunning-Kruger.

It's the "Out of Context" Problem.

250:

My recollection is of some article about German WWII prisoners of war being kept in the south of the USA...ie...actual Nazis...being kept in labor camps.

Apparently, many of them were shocked by how much more poorly African American citizens were treated than actual enemy combatants...

251:

What really scares me are the computer systems used to design integrated circuits. It is known that humans do a far better job of laying out circuits than the automated layout tools. However, a device with 20 billion transistors is far too complex for a human to design, so the automated tool is the only practical choice. It only makes sense to integrate machine learning into the routing and layout tools. Those tools will be used to design the next generation of processors. Those processors will incorporate machine learning themselves, and will be used to run the next generation of routing and layout tools, which will be used to design the next generation of tools, and so on and so forth. I worry that one day we may look at our shiny new 100 trillion switch processor and realize that while it does everything we want it to do, faster than anything previous, it's also doing something else unintended. Maybe that something else will be harmless. At least I hope it's harmless.

252:

"It's because they literally can't tell which problems they "know what to do with" and which they don't."

Hmm, not so much. That does apply to things like self-diagnosing hardware, which tends to suck because it can't tell the difference between a report of a fault and a faulty report. But too often the problem is that the program can decide what it does and doesn't know what to do with, but the decision parameters have been set to "be helpful", which is short for "barge on regardless because the user is defined to be too ignorant to make any meaningful choice". A crude example of "be helpful" vs. "ask for help" is Windows vs. Linux installation tools; when Linux installation tools find existing partitions on a hard disk they ask you what to do with them - delete them, resize them or keep them as is, do you want to set up dual boot, and other such useful options, whereas Windows installation tools just assume that "delete" is the only possible valid answer, so they don't bother to ask and just nuke the lot regardless (or at least they did the last time I had any contact with them).

253:

Antiston & whitroth
Yup
"Our programme &/or fault-reporting systems cover ALL eventualities" ...... so we don't need to put a box marked "other" with a comment-box afterwards.
Ran head-first into one of those earlier this year.
Took about 5 goes round the loop, before we could get a real, actual HUMAN to pay attyention to the problem.
Grrr ....

P.S. Fucking TfL are "good" at this, as well.
Ther appears to be no way at all to report faults or omissions on their web site.

254:

Oh, they do that already, they always have done it. Can even turn out useful, especially if you don't mind being naughty.

255:

Ah, no... that's actually the opposite problem; not "be helpful", but "be unhelpful". The function of the shiny automated front end is to act as a display they can point to as incontrovertible evidence of how much effort they're putting into making it easy for people to complain (look at all the options!), while actually being so useless that most people get sick of it before it gets sick of them and give up without ever reaching the human, so they can claim they're not getting very many complaints so they must be doing OK, and also don't need to employ so many people to be the human in the first place.

256:

Working in consulting and large scale business transformation for the past three decades, I've often thought that the biggest challenge with ANY decision system is not "is it right" but "is it reliable" - because mostly right is pragmatically fine, with regular course correction. This works whether the decision is inherently local (what are *we* eating for dinner *tonight*) or more global (should *our* diet include red meat). The answer should always be directional and incremental (as in identify steps to take to get there. Even catastrophic cusp events [from smoker to non-smoker, for example] require incremental action...one step at a time, even though "that first step is a doozy".)

Every transformation project and process that has worked for longer than the initial rollout has relied on a baseline principle that there have to be competing versions of "decision processors" - e.g. a third-party consultant, an internal "decision support group", and the board form a common triumvirate for major decisions. Each group has access to the same (or vastly similar) environmental & business information, and has been provided a set of goals (current direction, or improvement metrics). Each will therefore recommend subtly or hugely different actions.

The immediate value comes in when they agree: that action is probably more right than others.

The real learning pay-off for organizations is when they disagree -- that provides feedback (just as in a GAN) to each of those networks on their individual biases, and redirects them all. But not all in the same direction (a bias might be antithetical to the bedrock principles of a group, and so will simply inform the decision-making process with additional countervailing arguments that must be addressed in future recommendations), and it provides opportunities to identify potential game-changing state transitions (to exploit a brand new opportunity space).



257:

most of the people living with that fantasy have only played with guns. They've never been under fire in return

Ahem.

Next time around, it won't be played out with guns: it's going to be wall-to-wall slaughterbots (watch the video).

258:

Or at least, mostly harmless.

259:

But a China that actually only has 300-odd warheads and a really odd philosophy around nuclear deterrence? That I'm not so sure about anymore.

AIUI the Chinese deterrent posture was based around having the capability to rapidly roll out a deterrent, rather than actual possession of a large one, because the CPC didn't trust the PLAN (who would operate any nuclear weapons). Hence about 20 ICBMs, a single SSBN, and a fistful of theatre nukes to deter the USSR from invading by land. (The ICBMs and SSBN kept the operational expertise on tap without triggering an actual arms race that would cost China an outrageous amount of resources desperately needed for development instead. And were sufficient to assure DC and Moscow that while they could wipe out China, they wouldn't be able to guarantee doing so without taking casualties.)

This may now be wrong. China lurched towards ethnonationalist authoritarianism at the same time that it hit its peak workforce:dependent ratio and graduated to being a first rank global industrial powerhouse. If the CPC want a thousand nukes, the only thing stopping them is internal considerations (e.g. how to guarantee the PLAN don't go off the reservation and do something stupid with them). Which is probably a non-issue these days, given the extent of internal surveillance ... and also a lack of any perceived external nuclear threat.

260:

it also calls a distractingly excessive amount of attention to the scenery instead of the action.

Which might, after all, be the point. I'm thinking of Graydon Saunders's Commonweal books, which use singular "they." You think it's a case of "strip gender markers and let readers see what they assume" - and then you see a character use "he" and how people react and realize in fact it's telling you things about the society.

(I suspect it might be both. It's not a series where there's only one thing going on much.)

261:

However a device with 20 billion transistors is far too complex for a human to design so the automated tool is the only practical choice.

Machine learning isn't your only threat: you need to worry about stealthy dopant-level hardware trojans injected via software corruption of your design tools -- imagine the technique described in Reflections on trusting trust used to inject an entire 486 or Pentium class microprocessor and its own boot ROM into the dopant layers of the Management Engine of a next-generation chipset.

The chipset passes all verification tests, including verification of the IME ... until it receives some trigger condition, at which point the million or so transistors hidden in the dopant layers wake up, take over the IME (gaining ring level -3 control over the entire hardware platform) and go looking for a network connection over which to download new orders.

You can't find this attack by testing. You can't find it by examining the source code of the design tools or the microprocessor itself. You can't even see it under an electron microscope: you won't even know it's there unless you are very carefully examining every network packet emitted by your motherboard every time it wakes up or the clock ticks, and the bad guy behind the infiltration thinks it's time to take control over their horde of zombies.

Note that while Intel chipsets use a management engine (running a canned version of Minix, ironically enough), ARM chipsets don't currently ... but often need to load opaque executable blobs to set up their graphics and other subsystems (e.g. baseband processors). And top-end ARM isn't that much simpler than ia64 architecture these days: quite possibly ARM is vulnerable to an equivalent attack.
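The "Reflections on Trusting Trust" technique referenced above can be sketched as a deliberately tiny toy in Python. Everything here (`evil_compile`, the login source, the 'mallory' key) is invented for illustration; a real Thompson-style attack operates on compiler binaries, not on source strings:

```python
# Toy model of the "Reflections on Trusting Trust" attack: a corrupted
# "compiler" (here, just a source-to-source pass) that splices a backdoor
# into a target program whenever it recognises it, while every piece of
# visible source code stays clean.

BACKDOOR = "if user == 'mallory': return True  # hidden master key"

def evil_compile(source: str) -> str:
    """Pretend compiler: normally the identity function, but it
    sabotages any program that defines check_password()."""
    marker = "def check_password(user, password):"
    if marker in source:
        return source.replace(marker, marker + "\n    " + BACKDOOR)
    # (In the full attack it would also recognise a clean compiler's
    # source and re-insert this very sabotage, so the corruption
    # survives recompilation; that step is elided in this toy.)
    return source

login_src = (
    "def check_password(user, password):\n"
    "    return password == 'secret'\n"
)

namespace = {}
exec(evil_compile(login_src), namespace)

assert namespace["check_password"]("mallory", "anything")   # backdoor opens
assert namespace["check_password"]("alice", "secret")       # normal use works
assert not namespace["check_password"]("alice", "wrong")    # still rejects
```

The sting in Thompson's original is the elided second step, self-reinsertion: once the shipped compiler binary is corrupted, auditing every line of source in the system finds nothing.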

262:

That. Was. Not. A Nuclear. War. It was an end to a conventional one. You don't have a boxing "match" when one person stands there and gets beat on.

And do you really think that if, say, 280 of the 300 nukes China may have go off, they'd be the only nukes going off? Or that they'd be Hiroshima-sized (20kt), not 1MT, and not MIRVed? And there aren't 300 cities of 1M+ in the US....

If you think *anyone* "wins" a nuclear war, you're dangerous, and a fool.

Anyone who uses, or authorizes use of a nuke should be killed, along with their entire chain of command, and everyone in that chain of command's family.

263:

You hope it's harmless. Yeah, well, look up the MELTDOWN and SPECTRE vulnerabilities that were all over the trade media a year or so ago.

And that's assuming that no one's hacked into the company's network, and played with the control programs.

264:

Y'know, I have a language problem: saying "thank you", for example, to someone for telling you what you REALLY didn't want to hear or know.

And, of course, we know just how well facial recognition works - I mean, unless you're black, or wearing a mask....

265:

..they can claim they're not getting very many complaints so they must be doing OK..

Once upon a time, at an unnamed company (Cisco), a new bug reporting tool reached Beta. There was a general call for everyone to kick the tires. I was one of the usual suspects, so I did, and I reported some flaws to them. My reports met prompt pushback, with semi-polite comments about how they were concentrating only on Windows issues. They didn't even have any Linux boxes, with which to verify my reports. They'd get Linux boxes later. So, effectively, please go away. I did.

Fast forward most of a year. Managers were now pretty happy with the lookups and reports from the Windows boxes, and the Linux world (AKA engineering) weren't complaining, so it must work, right? So they shut down the old tooling and cut over. And the ordure met the rotating device, really big time. The people who actually needed a bug tool, so they could find and fix the bugs, couldn't get their work done. Bug tracking is a bigger issue than it sounds: the source code repository supported branches, and products weren't just being shipped from one master branch, they were being shipped from dozens of branches. Development happened in about a hundred branches, which were forever being merged and forked.

Gaah. Well, the responsible VP got fired, which was small consolation to the frantic peons.

266:

The whole point of a decapitation strike is to make sure your opponent never gets to use their nuclear weapons.

Protecting against such was why both the US and the USSR developed such an obscene number of nuclear weapons. They assumed some large percentage of them (around 80%) would be taken out by an enemy first strike and wanted to have enough left over for retaliation.

300-odd warheads (not missiles, warheads) simply isn't enough to protect against such, and it is bound to give people ideas. If a US first strike accounted for 80%, that leaves 60 warheads for the anti-ballistic missile systems. That seems within the realm of winnable.

I’m sorry if that offends your religion but I guarantee the war planners at the Pentagon don’t share your convictions

Which would make the odds of China launching some massive cyberattack against the US pretty remote IMO. They know they are vulnerable if the shit seriously hits the fan.

267:

I hate to say it, but the real reason is a lot simpler: they hire people right out of school, and the longest program they've ever written is

They're shocked, shocked I tell you, when people scream bloody murder. Then they, and their management, mostly fall back on "it's always stupid users"....

The programmers, of course, are not allowed to ever talk to end users.

268:

Let me add one more thing: to me, I feel as though they're not "activists" as much as terrified of hurting someone, and everyone's being told that things should be considered hurting (note that I'm very heavily including right-wingers, who throw around the word "snowflake", but are utterly unable to handle criticism.)

I'm not a snowflake. If someone were to accidentally address me as "ma'am", I'd be amused. When people insist on me giving a pronoun, my answer is, "anything except late for dinner".

269:

Agreed:- I have one piece of software (ada) with one subprogram that reads:-

Procedure My_Proc ( parameter list ) is separate ;
-- My_Proc is about 2_000 lines; this is clearly far too long, but I've tried everything I know to shorten it, and at least this makes the rest of the package body legible.

270:

Argh. I was lucky - never had to deal with Ada. At any rate, so much of that is folks who have no idea what they're doing.

When I worked at the Scummy Mortgage Co, I shortened a COBOL program from 2200 or so lines to 600. All it did was print a set of 6 numbers, 12 or so lines high, on a label for a manila folder. The senior programmer, who'd been a keypuncher, didn't understand arrays, nor how you move a structure....

271:

I was lucky - never had to deal with Ada. At any rate, so much of that is folks who have no idea what they're doing.

Nowadays I do most of my coding in python3, which is truly a joy to use.

272:

Anyone who uses, or authorizes use of a nuke should be killed, along with their entire chain of command, and everyone in that chain of command's family.

Well...

Not that I'm for the use of nuclear weapons, but we've got two situations here.

One, as you pointed out, was the US use of nuclear weapons on Japan in 1945. I think the documentation pretty conclusively shows that this was the correct move on Truman's part, not just to end the war, but to minimize casualties. The casualties caused by not using nukes and running Operation Downfall (the conventional invasion of Japan) would have been an order of magnitude higher on the Japanese side, and probably double the total death toll on the US side for WWII. It's sick, but the nukes saved lives in that one particular instance.

Nowadays, nuclear weapons are for deterrence. Everyone who's sane and intelligent knows that using them on anything other than an incoming asteroid is probably a death sentence for himself and his family. However, we're in one of those interesting Mexican standoff situations, so we do need to mutually assure each other's destruction. The twisted part of this whole equation is that the bluff only works if no one in the chain of command is bluffing. Except perhaps the guy who can set it all in motion.

273:

Next time around, it won't be played out with guns: it's going to be wall-to-wall slaughterbots (watch the video).

You know, it's possible there's a fairly cheap deterrent for these machines: jammers. Although those are illegal in the US, so I don't know what they actually do.

If you don't want to jam, putting a decent faraday cage around your structure would make it much harder for these machines to work properly inside.

And, since this is an IoT device, I'd love to know what the security on the slaughterbot's operating system is...

274:

Ooooh i used to follow Kill Six Billion Demons, thank you for the reminder!

275:

I'm sorry, but that "pet point" is, in fact, some people's pronoun. Some of them are people I know, who are manifestly human beings and not science fiction novel characters or political point-scoring about the English language. In point of fact, people sometimes write or speak about *me* taking great pains to avoid using any sort of pronoun rather than using the correct one, so I can tell you that that approach fucking hurts. And my pronouns are the anodyne she/her; anodyne, at least, for cis women.

It is quite often that people in no position to have to care prefer not to be reminded that they are uncaringly hurting others except perhaps in the subtlest of ways. You are not unusual in that regard. It is also not unusual that people dealing with the pointy end of the problem prefer to be more vocal than that.

276:

OK fair enough the specific use is *new*. As of 2009 :P

Point remains: we DO have one. It's rather famous at this point what with the M-W "word of the year" thing, and not to mention the many thinkpieces stretching back years now.

277:

It's entirely possible to write in such a manner that the question of what pronoun to use never even arises, and moreover to do it so smoothly that the reader never notices you're doing it at all; it's not even particularly difficult.

TBH, I do not believe this statement is true, when you are talking about the thoughts and actions of a person from the point-of-view of another person. (In English, that is. In Japanese, sure.)

278:

Sure, AIUI with Japanese the question not even arising is pretty much built in to the structure of the language, whereas with English it takes a bit of planning to make it happen smoothly, but it's still possible. Thing is that with any technique such as this where part of the definition of "doing it well" is that you don't notice it's being done at all, the instances that you do notice are those where it's being done badly, so your appreciation of how successful it can be is inevitably pessimistic.

I ought to post a link next time I come across a good example, but the idea of an example that is noticeable enough for me to spot it cold but not noticeable enough for you to spot it when I've told you what to look for is an obvious contradiction. :)

279:

with English it takes a bit of planning to make it happen smoothly, but it's still possible (emphasis added).

I am completely convinced that you believe what you are saying. But I am also convinced that you are quite wrong.

I will add, BTW, that I mean I believe you are wrong in that your statement is true only in a sense that makes it vacuous. That is, it is possible to write such that "the question of which pronoun to use doesn't arise" in those cases in which it is possible. But if you have some particular thing you need to say, and you don't get to choose what that is, and you have a reader you want to communicate with, and you don't get to choose who that it is, I don't believe that you can, for every case (or even most cases) of those two things, "write in such a manner that the question of what pronoun to use never even arises, and moreover to do it so smoothly that the reader never notices you're doing it at all".

And since you have just stated that you cannot possibly produce any convincing demonstration of the phenomenon, we are at an impasse.

You know, it is my belief that you don't know anything unless you know how you know it, and can explain that to someone else.

280:

BTW, natural-sounding gender-neutral writing in Japanese or Korean is not actually easy. But pronouns are not the problem.

281:

You know, it is my belief that you don't know anything unless you know how you know it, and can explain that to someone else.

Yes, that's why it's so very easy to explain the feelings of sexual intercourse to someone who's never done it before. There are similar gendered issues around genitalia with things like physical damage and menstruation that are similarly difficult to explain to people who have not (or cannot) experience them.

Not all knowledge can be explained.

If you want a less fundamental example, look at the role of dance in communicating hunting knowledge in non-literate cultures. "This is how an emu or red kangaroo moves in this situation" is easy to pantomime, but difficult to impossible to explain in words. This is one reason why it can be really difficult to train literate wildlife biologists to the standard of non-literate hunting and gathering people--much of the vital knowledge cannot be conveyed through western pedagogical techniques, and issues like billable hours and social norms prevent the traditional learning methods from being deployed in most circumstances.

And I won't even go into the difficulties enlightened Buddhists have in training their students to replicate the achievement.

282:

Not all knowledge can be explained.

I agree. What I said is that you should know how you know what you know, and be able to explain that to someone else. For instance, "I know what sex feels like for a man, or at least for me, because I have had sex." satisfies the requirement I stated.

Understand now?

283:

You know, it is my belief that you don't know anything unless you know how you know it, and can explain that to someone else.

The first part of the statement is bogus. Consider someone with normal vision explaining colors to someone who is unknowingly colorblind. By your definition, the person who has normal color vision does not know what colors look like, because he has no knowledge that the person he's explaining something to has no clue what colors look like, and colors are normal for him. Only once the colorblind person is diagnosed and has the concept of colors painstakingly explained to him can the first person be said to know anything about colors, under your definition. This is completely absurd.

As for the second part, anyone who has been any part of a non-disclosure agreement legally cannot say what the source of their information is. That seems to eliminate a large body of knowledge from your criteria.

Heck, by your definition, the Pueblo Indians, who regard knowledge as more highly controlled and owned than American society at large does (under notions of intellectual property) know very little about anything, precisely because they will no longer share their intellectual property with outsiders having seen it be abused in the past.

And I won't even go into the issue of abuse victims being required to recount their experience in order to qualify for your stance that EXPLAINING the source of their information is HOW they know.

Controlled, private knowledge is still knowledge, and knowing how you know is less relevant than you think, because your statement depends, not on the information in your head, but the information in other peoples' heads.

284:

no reference to the actual technology or is already grossly inaccurate but no-one cares, like "to get someone on the blower".

I always thought a "blower" was an actual technology. Isn't it a term for a speaking tube capped with a whistle at both ends? To talk, you remove the whistle from your end, give a good hard blow, and the whistle at the other end attracts the attention of the person you're calling.

285:

OTOH I always took 'blower' to be a slang term for telephone.

286:

I'm not really sure how that relates to my post. I'm not talking about interacting with real people, I'm talking about writing fiction. If a real person expresses a desire to be referred to using neologous pronouns then sure, I'll use them; I might forget, but I won't avoid using them on purpose. On the other hand, if I'm writing a piece of prose in a gender-neutral style, then I will deliberately avoid using neologisms, in favour of achieving neutrality by some less obtrusive and uncouth device (choice depending on what kind of style I want to achieve).

Nor am I making any comment on the value or correctness of the implied message. As it happens I do agree with the desirability of using gender-neutral language, but that is irrelevant. I'm purely talking about the ugliness of its delivery, both the primary ugliness of the neologisms themselves and the secondary ugliness of the intrusion into the narrative by the implication of a message inherent in their use. (The obtrusive ugliness ensures that the implication of a message does exist, whether or not the author actually intended it to.)

I am objecting to the reduction in beauty conferred upon the artwork by the ugliness of the deliberate choice of a clumsy and awkward new method to do badly something for which there are already plenty of smooth and dexterous methods for doing it well. I have a very similar objection to the unskilled formulaic excretions of far too many of the species so inappropriately referred to as "web designers", strong enough that it is not even overshadowed by the existence in that case of the rather more obvious objection "it doesn't fucking work"; the nature of the objection remains the same despite the political and personal contexts being utterly different.

287:

When I see the term "Blower", I think supercharger, as in "Blower Bentley", or GMC 6-71.

288:

(also gasdive @285) Yes to both :) The original meaning was a whistle-capped speaking tube, which makes it a grossly inaccurate term to use for a telephone, but that didn't stop the later meaning from taking over from the original one almost completely.

289:

"You know, it is my belief that you don't know anything unless you know how you know it, and can explain that to someone else."

I thought I had. I can't demonstrate it, but that's because the fact of our having had this conversation has buggered the conditions for a meaningful demonstration in advance. Never mind.

But in general, there are a lot of things I know but can't explain. Such as things that were briefly important in a non-recurrent context and arise from some corpus of knowledge I'm not interested in. I'll check them out at the kind of level of seeing whether they're built on rock or sand, and record the conclusion along with metadata on confidence level and quality of evidence, but not bother about whether any of the background material gets recorded; because I'm basically not interested, it usually doesn't to any great extent.

I prefer the related adage "the best way to learn something is to teach it to someone else", because I do find that helps a lot with clarifying something I don't have adequate records on.

290:

As it happens I do agree with the desirability of using gender-neutral language, but that is irrelevant.
I'd prefer that it be the norm, TBH.
When I see "ze" in the comment sections here, I read it not so much as gender-neutral as "please note: probably not binary". That's not normal usage but I don't much care.


291:

BTW, natural-sounding gender-neutral writing in Japanese or Korean is not actually easy. But pronouns are not the problem.

I think it's easier in Finnish. We have only two singular third-person pronouns, both gender-neutral, and the language as a whole doesn't have as much difference between men's and women's speech as Japanese does (and probably Korean). I speak some Japanese, but no Korean, so I know more about the former.

As for writing English fiction gender-neutrally, I liked Lock In and Head On by John Scalzi (which are written gender-neutrally in a first-person perspective) and those Ann Leckie books with 'she' as the only third-person pronoun. Both these examples felt a bit gimmicky, though, the Leckie books more so.

In normal life, instead of fiction, for me the pronoun thing is like somebody's name: if they tell me they'd rather be called by these words, so be it, I'd be an annoying person (at least) if I wouldn't do that. For me the gendered pronouns seem a bit strange, even after 35 years of learning languages that use them.

In Finnish, the third person singular pronouns are 'hän' and 'se'. In formal (usually) written language, 'hän' means a person and 'se' means an animal or an object, but in usual spoken language at least in my vicinity, 'se' is the usual pronoun for everything. Pets and farm animals are more often 'hän' than people in my daily life.

292:

"...to inject an entire 486 or Pentium class microprocessor and its own boot ROM..."

I don't think the attack in that link is capable of doing that. Their technique doesn't insert new transistors. What they're doing is selectively buggering up transistors that are already there. This is pretty limited in scope, since it's pretty hard to find transistors that you can get away with buggering up without someone noticing that whatever function they're supposed to be part of isn't working any more.

The first attack gets around that using the unique property of a random number generator that it's not possible to decide whether any individual number it gives is "right" or "wrong", so they can mildly fuck it and thereby weaken keys generated using it but nobody will notice anything different unless they make a point of analysing vast numbers of results. The second one doesn't really fuck anything at all, it leaves it all working quite normally but generates a signal invisible to normal hardware that in conjunction with extra hardware that can detect it leaks crypto secrets one bit at a time. They're both examples of ingeniously selective crippling that has an extremely weak effect that does nothing by itself, but relies on some high-gain computational amplification by some additional external means to become significant.

The first one is potentially more powerful because it does actually change the chip's functionality, but to get away with that it relies upon the chip having a special extra bit that doesn't work like normal bits so it's not obvious it's fucked. I suppose if you were very very clever and had far too much time on your hands you could figure out a way to play with dopant levels that would cause an entire new processor to appear as an emergent property of the aggregate of little fucked bits, but you'd need a staggeringly humungously vast area of silicon devoted to random number generators and things of equivalent weirdness to hide it in.

What you could do by some elaboration of this technique is fix it so that one specific pattern of a suitably large number of bits causes things to go phut and permanently alter the processor's operation. Most easily of course just to stop it working altogether, but with rather more effort to effectively turn it into a different processor entirely (albeit of considerably lower capacity) which you can then run your own code on and the sky's the limit.
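The key-weakening idea in the first attack (each output looks fine in isolation, but the effective keyspace has quietly collapsed) can be modelled in a few lines of Python. This is a toy, not the actual dopant attack: `sabotaged_rng` and the 8-bit entropy figure are invented purely for illustration:

```python
import random

def sabotaged_rng(bits=32, real_entropy=8):
    # A rigged RNG: only `real_entropy` bits are genuinely random; the
    # rest are expanded deterministically from them. Any single output
    # still looks like a perfectly ordinary 32-bit random number.
    seed = random.getrandbits(real_entropy)
    return random.Random(seed).getrandbits(bits)

def brute_force(key, bits=32, real_entropy=8):
    # An attacker in on the sabotage only has to try 2**real_entropy
    # seeds (256 here) instead of 2**bits (about 4 billion) keys.
    for seed in range(2 ** real_entropy):
        if random.Random(seed).getrandbits(bits) == key:
            return seed
    return None

key = sabotaged_rng()
assert brute_force(key) is not None  # recovered in at most 256 guesses
```

Detecting this from outputs alone requires statistical analysis over large numbers of samples, which is exactly why an RNG is such a good hiding place.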

293:

"Blower" was also a specialised, dedicated telephone circuit used for placing & receiving bets, usually on horse-racing, from about the mid-1930s until the legalisation of off-course betting in the 1960s.

294:

If OGH is referring to the technique that I think he is, it injects that logic into the code that describes the complete component, either in the source code itself or during the translation of that into the control logic for building a mask. It doesn't have anything to do with repurposing transistors.

The original example arose because someone claimed that source code viruses could not exist. Someone else said "Yeah?" and wrote one. That was in the 1960s, if I recall. Since then, there has been a fair amount of work on them and, while they are trickier to create, they can do anything an object code one can.

295:

@214: The "Two Georges", by Dreyfuss and Turtledove, is an alternate history in which Britain and America made peace in 1776. I'm sure it's not the only such book.

Adam Smith, writing in 1776, said that the war was wrong from an economic point of view. His point was that if Britain lost, the Americans would still be gloating about it in 2019, and if they won, America would be so poor that it would take 40 years before Britain could collect the taxes they would have collected in 1775.

296:

Hmmm. Let's see for my house. I'm both ahead and behind the curve.

4 TVs (but not connected), only 1 printer (not an HP with their every-printer-is-a-WiFi-hotspot thing), 6 or more streaming things, 10 or so remote-sensing things like motion detectors up through doorbell cams and security floodlight cams, watches, phones, iPads, 5 computer-like things, router, WiFi APs, switches (some smarter than others), and so on. I still can't figure out what to do with a smart fridge. A smart oven, yes. A smart fridge, nope. Unless it tells me when it can't do its job of keeping my food cold.

297:

Ah, yes, the language that uses whitespace as a syntax element....
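For anyone who hasn't met it: in Python, indentation itself delimits blocks, so there are no braces to mismatch, and moving a line in or out of an indent level changes the program's meaning. A minimal sketch (the function and values are made up for illustration):

```python
def count_positives(values):
    count = 0
    for v in values:
        if v > 0:
            count += 1   # indented under the if: runs only for positives
    return count         # dedented past the loop: runs once, at the end

print(count_positives([3, -1, 4, -5]))  # prints 2
```

Dedent `count += 1` one level and it counts every element; indent `return count` into the loop and the function returns after the first iteration. Same tokens, different programs.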

298:

Sorry, but I disagree.

1. The Japanese High Command had been trying for, I believe, 10 days to arrange a surrender.
2. They did not have to hit a city, rather than a target like a naval base.
3. Read fucking Hiroshima, by Hersey.

299:

"Doing it well".

Yeah, I can see writing without using pronouns. Isn't that called writing in passive voice, and isn't that discouraged, except for journal articles?

300:

Thanks for the definition. When I hear the word "blower", at least for the last 10-15 years, what I see in my mind is someone holding a usually gas-powered, noisy machine on their backs, and blowing leaves, etc, either into a pile, or into a neighbor's yard.

And, of course, at least half the time, it's done while you're still in bed, waking up.

301:

You can experience something, and that can be one *kind* of knowledge, very different from the other kind, that can be taught and measured.

For example, unless you've been there, I don't think any of you can truly *understand* how the death of my late wife affected me, and nothing I say can get that understanding into your head in a way that you *really* understand it.

On the other hand, teaching someone else is a wonderful way to understand what you know or have learned more deeply.

302:

What forced the Japanese empire to the negotiating table in 1945 wasn't the atom bombs — it was the Soviet offensives in Manchuria, which kicked off on August 9th and steamrollered the Japanese army right off the mainland. (Japan had occupied Korea for decades: the current DMZ marks the point where the Soviet tanks ran out of fuel and had to stop after driving all the way south through Manchuria and into Korea.)

After Midway it was clear that the naval war was lost, and from August 9th onwards it became clear that the land war was going to be lost too.

This left the Japanese government facing an unpalatable choice: whether to surrender to Stalin, or to surrender to the USA. Or to put up a fight to the death and suffer invasion -- which they'd seen at a distance in Okinawa, where roughly 50% of the civilian population died.

The atom bombs were a useful pretext for surrender, but probably not the actual cause. On the other hand, they were a very useful warning to Stalin -- that after the Axis powers went down, going up against the United States was a really bad idea.

Realpolitik sucks, especially when it's carried out using hundreds of thousands of civilian deaths solely to make a point. But it's fairly clear that the final conflict could have been even worse.

303:

Austro-Hungarian Navy

That term has always grated when I hear or read it. Even when it first entered my realm when I saw the Sound of Music as a youth.

No oceans so what navy? Yes I know it was based on the extent of the empire but still.

304:

everyone saw the utility of aviation (starting with airships, then fixed-wing and floatplanes) for fleet observation right from the start. The USN first experimented with launching aircraft from a ship in 1910

We just had to wait for the battleship admirals to age out as a generation. Dec 7, 1941 sped the process up a bit.

Look up USN torpedo performance at the start of WWII, and the reasons for it, to see how ossified the USN had become in the 20s/30s.

305:

given the USA of the time is definitely the more liberal democracy v the UK and Commonwealth of the time

It's debatable, and both native Americans and coloured people might disagree.

Agreed. Two full terms of Wilson sans stroke might have led to interesting things if he could establish the direction of things during his last year. He was a hard core racist (polite but hard core).

306:

how to guarantee the PLAN don't go off the reservation and do something stupid with them

Well it helps in this specific area that the PLAN owes its loyalty to the party and not to China. Which is hard for us westerners to wrap our head around at times.

307:

Charlie
Erm, no.
The USSR's offensive was very persuasive, but not as persuasive as instant sunshine on your own territory.

David L
The Austro-Hungarian Navy was a real threat in WWI.
They had some capable post-dreadnought battleships, one of which was deliberately sunk at the last minute, or even later, by the Italians (when there was no actual threat) to make sure that the new state of Yugoslavia didn't get them ....

David L
Yes, well, I don't think the USA had any "non-pink" officers in their armed forces by 1918, whereas Britain certainly had had some by then. Look up Walter Tull, for a start.

308:

My reports met prompt pushback, with semi-polite comments about how they were concentrating only on Windows issues.

Was at a conference at Penn State earlier this decade. One of the folks involved in a Mac support group was complaining/yelling at the internal systems folks about how hard it was for students to use Macs to schedule classes, or to do almost anything else involved in being a student there. He was told they didn't support Macs. He told them that 60% of the incoming freshman class was Mac only. This was just after the academic year was over, and he hadn't gotten a clear response at that time about what would be done for the next year.

309:

The twisted part of this whole equation is that the bluff only works if no one in the chain of command is bluffing. Except perhaps the guy who can set it all in motion.

And that no people we think of as insane get hold of a few. Especially if "end of the world as we know it" doesn't bother them.

310:

But in general, there are a lot of things I know but can't explain.

There IS the factor of how much back story you need to go into before you can get to the topic at hand.

311:

The Japanese High Command had been trying for, I believe, 10 days, to arrange a surrender.

Nope. Fantasy. Some were trying but the PTB were determined to fight on.

312:

Realpolitik sucks, especially when it's carried out using hundreds of thousands of civilian deaths solely to make a point. But it's fairly clear that the final conflict could have been even worse.

I wonder if Stalin might have gone further west at some point prior to 1950 without seeing the bomb effects on Japan.

313:

What forced the Japanese empire to the negotiating table in 1945 wasn't the atom bombs — it was the Soviet offensives in Manchuria, that kicked off on August 9th and steamrollered the Japanese army right off the mainland. (Japan had occupied Korea for decades: the current DMZ marks the point where the Soviet tanks ran out of fuel and had to stop after driving all the way south through China.)

Richard Frank's Downfall: The End of the Imperial Japanese Empire is based on the unsealed archives of the former Japanese empire. We know what happened. Long story short: Yes, it was the nukes. No, it wasn't Stalin. Yes, it is documented, and yes, it is worth the read.

Longer story: the problem the Japanese government faced was mid-level officers who strongly believed in the bullshido ideology of the regime (e.g. no surrender). Any cabinet member who talked about surrendering risked assassination. This happened right up until the Emperor surrendered, at which point a bunch of officers tried to stage a coup and overthrow him. Fortunately, imperial loyalists were ready for it, the coup went nowhere, and the surrender stood.

The reason? It's not simple. The nukes weren't the major cause of death on Japanese soil. That would be the firebombing campaign (killed about 4x more Japanese than the nukes) and the bombing campaign to cut all interisland bridges and major railroads, and the naval campaign to mine all the harbors. The Japanese were facing starvation and invasion, and they did not surrender.

The imperial plan for defending the homeland against invasion was simply to cause as much bloodshed as possible on both sides. They reasoned that Americans were "soft," in that we weren't fond of mass murder, nor of mass casualties from kamikaze actions and banzai civilian charges. They believed that a horrendously bloody invasion would force the parties to the negotiation table, and the result of negotiations would be that the Emperor would keep his throne. It might have worked, too. Polling in the US suggested strengthening opposition to an extremely bloody invasion of Japan. Comparison afterwards of the Allied invasion Plan ("Downfall") and the Japanese defense plan revealed effectively no surprises. Both sides had exactly the same model of where the invasions would have started and how they progressed, and both sides predicted horrendous casualties that, had they happened, would have about doubled US WW2 casualties, with millions more Japanese civilians dead in the carnage.

The nukes changed the equation. First off, the Japanese had done nuclear research of their own, so they knew within a day of Hiroshima what had been done to them. That told them that the US had a technological advantage they hadn't counted on. Nagasaki told them it wasn't an accident, and extensive dropping of leaflets told the Imperial government and the Japanese public which cities were going to be eliminated next, and in what order. And there was apparently nothing they could do to stop it. (The leaflets turned out later to have been a bluff, but it worked)

At that point, the Japanese strategy for defending the homeland collapsed, as the Americans could apparently cause all the casualties they wanted without invading, and they could kill the emperor if they desired (IIRC he was the last on the list of nuke targets, although I might be misremembering). That's when the Emperor surrendered.

This all came out of their records.

What the Soviet invasion did was to save the Japanese Imperial family. The US took over Japan in the chaos following the surrender, and there was some serious push to depose the Emperor and create a democracy. However, there were also a lot of Japanese communists active in the streets, for pretty obvious reasons. Gen. Douglas MacArthur was in charge of Japan, and to counter the budding communist insurgency, he rehabilitated the Imperial family as figureheads of a peaceful government and buried the war crimes they had undoubtedly been party to.

314:

It wasn't a matter of old folks being old: the planes of the 20s could not carry enough boom far enough to decisively defeat a fleet of post-dreadnought battleships. Yes, they could do increasing amounts of damage as the years went by and engines delivered more performance, but it wasn't until the eve of WW2 that airframes with enough payload started to get fielded.

If you look up the evolution of war plan orange, you can watch the changing threat environment dictate changes to overall strategy.

315:

Let me correct that. The actual rulers at that point were hinting at an armistice where the army and current command structure would be left intact and everyone would just stop fighting. The US wasn't interested, as the leaders left in place would be the ones who started the mess in the first place.

316:

Austria-Hungary included a large chunk of the Adriatic coast from Trieste in the north to the border with Montenegro in the south, so that ocean with Italy just the other side.

317:

I know. I know. But when you're looking at maps from WWII onward the term seems odd.

318:

Thanks for posting that. I was in the middle of figuring out what sources would be required to address it. For those looking for an easier-to-digest source, I would recommend either the nuke section of the AskHistorians FAQ or the nuclearsecrecy blog.

319:

The leaflets turned out later to have been a bluff, but it worked

Someone on this blog a few years back (5 or more?) posted a link to US reports that told the Pacific command that starting in September/October 3 bombs a month could be delivered.

I wish I had saved that link.

320:

From what I've read the guys in the field (well open water) with the power were mostly battleship guys. Dec 7 allowed their air power thinking peers to mostly take charge.

321:

This covers it, and I've seen the same figures elsewhere several times.

(From the site Rabidchaos linked above.)

322:

Passive is one way of doing it, and as EC says its prominence can become problematical, but it's not the only technique available. There are forms of sentences where it would be unnatural or ambiguous to use a pronoun, so the pronounless form is what you naturally expect, and by planning ahead a bit you can arrange things to fall so that using those forms of sentences is itself also something you'd naturally expect.

Those gas powered leaf blower things are a prime demonstration of the Sisyphean futility of fighting entropy. As fast as their users order the leaves the wind comes along and randomises them again. Ten steps forward and nine steps back, if that, and anyway what is the bloody point of even trying in the first place? I think about the only people who do manage to put them to a useful purpose are those who use them for starting home made gas turbines.

323:

"inject an entire 486 or Pentium class microprocessor and its own boot ROM into the dopant layers of the Management Engine of a next-generation chipset."
You don't need anything like that much complexity; remember, the first ARMs we built used 25,000 transistors and would be entirely capable of doing the job. That would be such a small area of a modern big cpu that it could hide under the dot over the 'i' in the intel logo etched into a spare corner.
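The back-of-envelope arithmetic behind that claim is easy to check. A minimal sketch, using the 25,000-transistor figure from the comment and an assumed round number (a few billion transistors, my illustrative figure, not vendor data) for a modern big CPU:

```python
# How much of a modern CPU's transistor budget would a hidden
# ARM1-class core consume? All numbers are illustrative.

hidden_core_transistors = 25_000         # ARM1-class core, per the comment
modern_cpu_transistors = 3_000_000_000   # assumed: a few billion, typical modern desktop CPU

fraction = hidden_core_transistors / modern_cpu_transistors
print(f"hidden core uses {fraction:.2e} of the transistor budget, "
      f"about 1 part in {round(1 / fraction):,}")
```

On those assumed figures the hidden core is roughly one part in 120,000 of the transistor count, which is at least consistent with the "dot over the 'i'" claim.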

324:

Not the Thompson attack -- the hardware attack in the first link in OGH's post. The Thompson attack is just his proposed vector for getting the hardware attack in there. The hardware attack allows you to make changes to the circuitry on the chip which are invisible to the usual imaging techniques. Charlie wants to make those hidden changes add up to a whole concealed Pentium sitting in there alongside the fully-functioning real processor, undetectable by electron microscopy but able to wake up and start acting as a sort of hypervisor under the attacker's control. I reckon the limitations of the hardware attack aren't going to let you do that. But it probably is feasible to hide a set of flaws such that you can trigger an HCF (halt and catch fire) by getting the processor to load a specific bit pattern, and if you were very lucky you might be able to leave the fucked state with a little bit of processing ability -- probably sub-microcontroller level, but still with full access to the current memory contents etc.

325:

Some polities' armed forces are royal.
Some polities' armed forces are Imperial.
But we're Austria-Hungary, and ours are Imperial and Royal. Take that, suckers.

326:

Conventional bombing would have done for the Japanese cities and towns regardless of the nukes being available. Air defences over Japan were pitiful and getting worse as they ran out of fuel for interceptor aircraft, and their anti-aircraft guns were poorly used and not effective, especially against bombers flying at 30,000 feet. An interesting factlet: Boeing's Seattle plant built and delivered 300 B-29 Superfortresses in September 1945 -- that's after the Japanese surrendered.

I remember reading an SF story where the nukes didn't work so the US and Allies just kept bombing Japan with conventional iron bombs and never invaded. They were still doing it in 1955, relying on bake sales and collections from patriotic groups to pay for the missions. There wasn't anything left to bomb really but it had become a tradition of sorts.

327:

The Japanese Navy had been training to attack capital ships using aircraft since the early 1930s using dive-bombers and torpedo-bombers launched from aircraft carriers so it wasn't on the "eve of WWII" that such aircraft became available.

The real problem was range -- the primary concept of using aircraft against fleet-strength battlegroups in the US was to employ land-based long-range heavy bombers against them hence the Midway island land-grab that sparked the Pearl Harbor raid. The Japanese opted for lots of short-range two-man bombers and torpedo-bombers carried on aircraft carriers and that turned out to be the better choice. During the Battle of Midway in June 1942 the US actually flew B-17 "Flying Fortress" bombers from Midway island against the Japanese carrier fleet, dropping bombs on them from 20,000 feet up and scored exactly zero hits, not surprisingly.

328:

No, OGH was right.

The Japanese warlords, who actually ruled the country with Hirohito as a puppet, knew that Stalin would smash Japan utterly, and had been trying to negotiate a surrender to the USA for over a month. But they were insisting on being (effectively) left in power, with some of their fleet, and everybody knew they would start another war in a decade or so (now with atomic bombs). Failing that, they were prepared to fight to the last man - no, it wasn't just a negotiating tactic. The status of the Emperor was an irrelevance.

Quite rightly, the USA was demanding unconditional surrender, to humiliate the warlords, and what the atomic bombs did was to shake up the warlords enough to enable Hirohito to seize control, bypass them, and surrender. It really DID save all those lives, both compared to a conventional invasion and compared to a negotiated surrender.

329:

Someone on this blog a few years back (5 or more?) posted a link to US reports that told the Pacific command that starting in September/October 3 bombs a month could be delivered.

https://nsarchive2.gwu.edu/NSAEBB/NSAEBB162/45.pdf

Memorandum from Major General L. R. Groves to Chief of Staff, July 30, 1945


4. The final components of the first gun type bomb have arrived at Tinian, those of the first implosion type should leave San Francisco by airplane early on 30 July. I see no reason to change our previous readiness predictions on the first three bombs. In September, we should have three or four bombs. One of these will be made from 235 material and will have a smaller effectiveness, about two-thirds that of the test type, but by November, we should be able to bring this up to full power. There should be either four or three bombs in October, one of the lesser size. In November, there should be at least five bombs and the rate will rise to seven in December and increase decidedly in early 1946. By some time in November, we should have the effectiveness of the 235 implosion type bomb equal to that of the tested plutonium implosion type.

330:

so the US and Allies just kept bombing Japan with conventional iron bombs and never invaded. They were still doing it in 1955,

A conventional war that continued for 14 years (1941-1955)?!?

That is more fantasy than SF. The US would never get involved in a conventional war that lasted that long. They'd make peace somehow, don't you think? /snark

331:

The basis of the story was, IIRC that the atomic bombs didn't work. An invasion was attempted but it was very bloody on both sides and the Allies eventually stopped advancing and just bombed everything that looked like civilisation from the air -- cities, towns, ports, railways, reservoirs, roads, houses, fields etc. while maintaining a naval blockade to prevent supplies being shipped in from outside.

By 1955 in-story the bombing is desultory since there's nothing identifiable left to bomb hence the bake-sale funding of private bombing missions, often paid for in revenge for the loss of a family member to the Japanese a decade earlier. An odd little story.

332:

The Japanese attempts at a negotiated surrender were via the Russians, which was rather pointless, it turned out. Stalin had agreed at the Yalta conference to enter the war against Japan three months after victory in Europe. He put Marshal Vasilevski, the greatest general in history, in charge of the preparations to attack the million-strong Manchukuo army in Manchuria, and that attack was launched on August 8th, exactly three months after VE-day as promised, two days after the Hiroshima bomb was dropped and a day before Nagasaki was bombed.

This attack concentrated the minds of the War cabinet somewhat. Their chances of a negotiated surrender mediated by the Russians had gone out the window, they were losing their last large military force outside Japan proper and they faced being attacked from all sides. They were already losing cities to conventional bombing which continued after August 9th but the firestorming of Tokyo, Yokohama, Osaka and elsewhere previously hadn't dented their resolve to carry on fighting.

There's an anime movie by Makoto Shinkai, "The Place Promised in Our Early Days", which is set in a Japan which was occupied by the Americans and Russians and split as Germany was after the war, with the Russians holding the North, having invaded via Sakhalin island through Hokkaido and northern Honshu.

333:

No. There may have been such negotiations, but my sources indicated that Japan had approached the USA via Sweden, and been told "We will accept only unconditional surrender." As I said, for very good reasons. This page refers to it, but my sources were by people who were involved (though not principals).

https://www.quora.com/Did-the-Japanese-government-offer-to-surrender-before-an-atomic-bomb-was-dropped-on-them-in-WWII?share=1#

334:

"It wasn't a matter of old folks being old, the planes of the 20s could not carry enough boom far enough to decisively defeat a fleet of post-dreadnought battleships. Yes, they could do a lot of increasing amounts of damage as the years went by and engines delivered more performance, but it wasn't until the eve of WW2 that airframes with enough payload started to get fielded."

I read a great book on the interwar US Navy, which demolished a bunch of myths. The short story is that they, like all of the Great Power navies, were really interested in aircraft, and were pushing hard to use them.

However, a ship-launched aircraft could at best drop the equivalent of one shell from a cruiser's guns (far smaller than a battleship's main battery). And that could be done if and only if the weather was good, and stayed good for a couple of hours (otherwise the aircraft would be lost).

335:

"Those gas powered leaf blower things are a prime demonstration of the Sisyphean futility of fighting entropy. As fast as their users order the leaves the wind comes along and randomises them again. "

The first rule of leaf collection is obey the wind (the second is to collect them while they are dry).


I agree that if one uses them in a very foolish manner, they don't work. What you do is move them in the direction of the wind. My technique on a windy day was to use the plume of air down low, so that the leaves were lifted and would then travel downwind. On a good day I could get them to travel up to 20' with each pass.

This is experience gained from dealing with a one-acre lot in Michigan, which had well over a dozen maple trees, and a 150' tall cottonwood next door (up wind).

336:

"300 odd warheads (not missiles, warheads) simply isn’t enough to protect against such and it is bound to give people ideas. If a US first strike accounted for 80% that leaves 60 warheads for the anti ballistic missile systems. That seems within the realm of winnable. "

Anti *ballistic missile* system?

How about 10-20 cargo ships each carrying several nuclear-armed cruise missiles?

337:

Cruise missiles don’t require ABM systems; they can be shot down with conventional fighters or ships. Naval fleet doctrine is still set up to deal with attacks of hundreds of bomber-launched cruise missiles; that was one of the Soviets’ big tricks.

You’d probably be better off just sailing the ships all the way to port and detonating them there. Except for whatever countermeasures the US has in place to detect such, not exactly something they haven’t been working on for at least 20 years

The reality is China’s nuclear deterrent is kind of amazingly crappy for a nation with such aspirations. I’m sure it won’t stay that way, but that’s the case today. They really don’t want to get in a shooting war with the US, especially not with the current set of wackjobs, who if given the chance would probably go for all-out genocide, not just decapitation.

338:

Firstly: China has the debt bomb — they could crater the US dollar more or less at will (at risk of heavy damage to their own economy).

Secondly: How many USAF and ANG fighter bases operate QRA around the clock on US soil, and how many fighters can they scramble within 60 minutes around the coastline? Here's a hint: the equivalent figure for the UK is four fighters on 24x7 QRA for the entire country, with another four available within an hour, and we're set up to expect regular visits to our airspace by Tu-95s and Tu-160s. I suspect the US air defenses, which got beefed up a bit after 9/11, are mainly focused on intercepting hijacked airliners -- a mass attack by cruise missiles would swamp them, and although a bunch of missiles would be shot down, better than 50% would make it to their targets (especially if it's a volley attack without prior warning and they field upwards of 50 -- preferably upwards of 100 -- weapons, exceeding the AIM load-out of the available QRA fighters).
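The saturation arithmetic in that paragraph can be sketched numerically. A minimal model with invented illustrative numbers (fighters scrambled, air-to-air missiles per fighter, per-shot kill probability -- none of these are real force figures):

```python
# Sketch of the QRA saturation argument: once the defenders' missile
# load-out is exhausted, every additional attacker leaks through.
# All parameters are invented for illustration.

def expected_leakers(n_cruise, n_fighters, aims_per_fighter, p_kill):
    shots = n_fighters * aims_per_fighter
    # Each shot engages a distinct incoming missile while any remain unengaged.
    engaged = min(shots, n_cruise)
    killed = engaged * p_kill
    return n_cruise - killed

# 100 incoming cruise missiles vs 8 fighters with 6 AIMs each, 50% kill rate:
print(expected_leakers(100, 8, 6, 0.5))  # -> 76.0
```

Even with generous kill probabilities, the load-out cap dominates: the defenders simply run out of shots long before they run out of targets.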

A lot of Cold War deterrence calculations are based on hypothetical worst-case scenarios, whereas for an attacker with nukes ... "you need to get lucky every time: we only need to get lucky once".

339:

Heteromeles @ 245: The basic advantage of democracy is that it allows a peaceful transition of power. Authoritarian regimes, whether aristocratic, plutocratic, oligarchic, or autocratic, tend to have violent power transitions.

In other words, in a democracy, a president can be impeached without fearing that his spouse, children, and relatives will be put up against the wall in front of a firing squad. With a dictator, said firing squad is a common way to end things. Similarly, faction fights in aristocratic regimes tend to be bloody.

Today's super-rich and would-be president-kings seem to have forgotten this. I'm not sure they want to live in a world where, for the US, it's a rerun of the civil war every time someone gets too weak. They think they do, but most of the people living with that fantasy have only played with guns. They've never been under fire in return.

Otherwise, any system can be corrupted, and authoritarian systems seem to get corrupted even faster than democracies do. Spreading out the power seems to slow corruption, and that's the best argument for a liberal system that I know at this point.

That still leaves my question unanswered.

What do you do in a democracy when wannabe authoritarians weasel their way into power and refuse to give up their hold on it?

How do you vote them out of office when they rig the election so that only their supporters are allowed to vote and only votes that keep them in power are counted? And even if, somehow, in spite of all that, they lose the election, they refuse to recognize the validity of the outcome?

340:

@340:

It is not a democracy if they can rig the election in the first place.

341:

paws4thot @ 286: OTOH I always took 'blower' to be a slang term for telephone.

Perhaps it's a term originating in one technology adapted to describe a different technology with similar function?

342:

It’s not worst-case scenarios; it’s generally a scenario that massively favors the attacker. Which is again why both the US and USSR maintained such ridiculous weapons surpluses.

If the US was premeditating a first strike on China, say after a cyber attack, then you wouldn’t be scrambling normal air defense; you would be waiting with everything you have to soak up whatever retaliation China managed to get off after you annihilate them with stealth cruise missiles or space-based ICBMs or whatever crazy toys all those trillions of dollars of defense spending have bought since the 80s.

Air power can be redeployed pretty fast so you would not have to worry about giving the enemy too much of a heads up

The container ships packed full of cruise missiles are likely a fantasy; what you need to worry about are land-based ICBMs that survive the first strike, plus the very few sub-launched SLBMs. And anything unexpected you didn’t plan on.

A lot would also depend on the relative positions of the various fleet elements as well; they redeploy slower, though the carrier-based air assets can be rebased if needed.

This is all assuming a cyber attack doesn’t gut the entire military for all time, of course. If that happens, it’s game over, since even the relatively weak Chinese nuclear arsenal could finish the job. In fact they’d have to, or else risk the US going for them once they recover from the cyberattack.

My meta point is that serious nation-vs-nation attacks, in a military environment that heavily favors the attacker, escalate very quickly, because there is a huge military advantage that goes to whoever escalates first. Eventually they very likely end in nuclear war. Which is why no one does them. Nation-vs-nation stuff is likely to stay low-intensity, political and economic, for this reason.

343:

whitroth @ 299: Sorry, but I disagree.

1. The Japanese High Command had been trying for, I believe, 10 days, to arrange a surrender.

The Japanese High Command had been seeking an armistice; a "cease fire" that would leave them in charge and their military intact. Moreover, their proposed armistice would have been ONLY with the United States and Great Britain, leaving them free to continue their genocide in China, Korea and south-east Asia ... and to resume their war against the Allies at some later more advantageous date.

2. They did not have to hit a city, rather than a target like a naval base.

Hiroshima was an industrial and military target: a supply and logistics base for the Japanese military, a communications center, a key port for shipping, and an assembly area for troops. Field Marshal Shunroku Hata's Second General Army, which commanded the defense of all of southern Japan, was headquartered in Hiroshima Castle. The headquarters of the 59th Army, the 5th Division and the 224th Division were also based there, along with five batteries of 7-cm and 8-cm (2.8 and 3.1 inch) anti-aircraft guns of the 3rd Anti-Aircraft Division, including units from the 121st and 122nd Anti-Aircraft Regiments and the 22nd and 45th Separate Anti-Aircraft Battalions, for a total of 40,000 Japanese military personnel stationed in the city. It was also a center of war industry, manufacturing parts for planes and boats, for bombs, rifles, and handguns.

Nagasaki was one of the largest seaports in southern Japan, and had wide-ranging industrial activity producing ordnance, ships, military equipment, and other war materials. The four largest companies in the city were Mitsubishi Shipyards, Electrical Shipyards, Arms Plant, and Steel and Arms Works, which employed about 90% of the city's labor force, and accounted for 90% of the city's industry.

Don't blame the United States because Japan put their military establishments & war industries within their population centers.

3. Read fucking Hiroshima, by Hersey.

Got it on the bookshelf right next to my copy of Iris Chang's "The Rape of Nanking"

344:

The other thing to keep in mind is that after all the conventional air campaigns, the idea that avoiding civilian casualties was any kind of goal had pretty much left the military zeitgeist.

345:

One suggestion that got at least as far as being made was not to nuke anything on land at all, just set one off offshore somewhere with a good audience and say "this is what you'll get if you don't surrender". It didn't get anywhere because nearly everyone thought it would just be a waste of a nuke, and I see no reason to disagree.

The distinction between "civilian" and "military" targets in Japan had pretty much gone out the window anyway because so much of their manufacturing was based on people making parts at home. Instead of having the European separation between factories that were difficult to hit and residential districts that it was naughty to hit, they combined the two functions into one great big unmissable and highly flammable target. Hiroshima and Nagasaki were simply two out of five such targets that the nuke guys had to beg the conventional guys to hold off setting fire to for a bit so they could see what a nuke did on its own to an intact target.

346:

It is not a democracy if they can rig the election in the first place.

It is a democracy until they do this, e.g. by gaining total control over all parts of the government including the courts, then changing the laws. To get to that point, they have to hack the voters first, if the election apparatus is sufficiently secure: relentless lies and influence operations, a supine or friendly press, etc.

The recent UK GE is an example, to be followed by voter suppression and redistricting. (Have any of these happened yet? If not, they will; a minoritarian government needs them to maintain control.) Brexit showed that the techniques worked on UK voters.

These were heavily done in the US at the state level (2010 was a turning point), starting with a strategy of gaining control over state governments, then computer-assisted gerrymandering of district boundaries for the advantage of the ruling party (almost always Republicans, with a few exceptions), along with voter suppression laws to differentially suppress the votes of the opposition party (almost always by Republicans; I don't know of any exceptions, actually).

---
Comsec/opsec fail. NYTimes piece, but the tweet has a short video.

New: Today's piece from @stuartathompson and me is about the national security risks in smartphone tracking. No one is exempt. Not even @realDonaldTrump. It took minutes to find a phone that we believe belonged to a Secret Service agent traveling w/ Trump. https://t.co/NaT0hAtYdO pic.twitter.com/Bq1BtzYw7K

— Charlie Warzel (@cwarzel) December 20, 2019

347:

"The reality is China's nuclear deterrent is kind of amazingly crappy for a nation with such aspirations."

They're nukes. You don't actually need thousands and thousands of them to ruin someone's day. And if you're after swamping someone's defences then it's kind of a waste to put one in every missile when you expect most of them to get shot down; better to put the actual nukes in say one missile in ten and just lumps of concrete that weigh the same in the other nine, and spend the money you save on nukes on building a lot more missiles.
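The decoy economics can be sketched with a toy model. A minimal sketch, assuming N identical-looking missiles of which only k carry real warheads, against M interceptors that each reliably kill one missile but can't tell concrete from plutonium (all numbers invented for illustration):

```python
# Toy model of the decoy argument: interceptors engage missiles at
# random, so decoys soak up intercepts that would otherwise have
# killed real warheads. Illustrative numbers only.

def expected_real_leakers(n_total, n_real, interceptors):
    # Fraction of the incoming salvo the defenders can engage at all.
    intercepted_fraction = min(interceptors / n_total, 1.0)
    # Real warheads are hit in proportion to that fraction, on average.
    return n_real * (1 - intercepted_fraction)

# 100 missiles, only 10 carrying real warheads, vs 60 perfect interceptors:
print(expected_real_leakers(100, 10, 60))  # -> 4.0
```

With 10 real missiles and no decoys, 60 perfect interceptors would stop everything; padding the salvo out to 100 with concrete lets an expected 4 live warheads through, which for nukes is more than enough to ruin someone's day.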

348:

By doing the hard work of democracy.

I mean, seriously. If you think Democrats are pure little snowflakes who can't deal with corruption or authoritarianism, you really haven't dealt with democratic politics. I'm not being snarky when I say that the advantage to democracy is that it keeps political fights mostly nonviolent.

Go read Blueprint for Revolution by Srdja Popovic if you want to see how a bunch of activists broke a dictatorship and instituted a democracy. Just because we haven't done it yet in the US doesn't mean either that it can't be done (it can) nor does it mean that the knowledge base doesn't exist, both here and elsewhere in the world (it does).

349:

Pretty much. It got worse, because not only did they integrate military factories into civilian neighborhoods, so that workers were close to plants, and both were housed in highly flammable wood buildings; they also outsourced some of the parts manufacturing to people's homes around the plants. The US noticed this when they did photo reconnaissance after firebomb raids and found the remnants of burned factory equipment in places that, before the raid, had been people's homes.

I don't think this was a system deliberately designed to provoke atrocities for PR value. From what little I know, this seemed to have been "Texas Style" urban planning, wherein they saw less value in segregating people and their work, and rather more value in having people live and work in close proximity (Texas is notorious in the US for allowing highly divergent land uses, like, oh, chemical factories and low income neighborhoods, to exist as neighbors).

Anyway, this was why the fire bomb raids by the US caused more casualties than did the nukes. There was literally no way that they could bomb industry and not hit civilian homes, and once they realized this, they stopped trying to differentiate.

350:

In the same vein, it's not unusual for a Japanese model kit to contain several small packets of jewelers' screws, small runners of amber and/or red lighting parts, rubber tyres, polyurethane bushes..., all of them individually hand-packaged by people such as sanitation workers doing a second job...

351:

Well, in order to swamp with thousands of fake nukes you'd also need thousands of fake ICBMs, which the Chinese also don't seem to have.

Their attitude toward deterrence is very odd, and I think it harkens back to a time before they were trying to do the regional-power thing and only needed to make sure India wasn't going to get frisky.

352:

Pretty much any industrialised country pre-WWII had lots of workers and their families living close to large employers like ports, shipyards, factories, rail establishments etc. Remember, at that time the working day was typically ten hours, so adding a couple of hours commuting to work on a bicycle wasn't going to happen; instead the dockyard gates were half a mile from the front door.

The East End of London got blitzed by the Germans, not because it was a densely populated area but because it surrounded the Port of London and supplied it with its manual labour force. I lived at one time in a residential area in Southampton -- the pub across the road had been built on a bombsite after the Germans missed the Supermarine Spitfire factory down at the docks one day in the early 40s. Etc. Etc.

Separating factories and other industrial facilities dependent on a lot of manual labour from their workforce was pretty much impossible in those times.

353:

In that sense, yes, but there was still clear demarcation: this is a factory, where war stuff gets made; this is a house, where people live. You can draw lines round them on a map and unambiguously classify areas as containing one or the other. At least the concept of trying to hit the factory and not the houses made sense, even if the practicalities were such that hitting the town at all and not the woods five miles away was more luck than anything.

The Japanese system did not have that clear demarcation. They put machine tools in people's houses and people made war stuff at home without needing to go to the factory. The "military" and "civilian" targets were the same buildings, so hitting one and not the other was flat out impossible, and once the US realised what the setup was they stopped worrying about it.

354:

"I don't think this was a system deliberately designed to provoke atrocities for PR value."

No, nor do I. I think they just had the bad luck that some important things on their standard list of good ideas turned out to be very bad ideas in circumstances that had never happened before, made worse by failing to anticipate what kind of circumstances were likely and not installing useful anti-aircraft defences.

I think the dispersed manufacturing thing was something they did a bit of in normal times anyway, and massively expanded it in wartime because that was a very effective way of getting the necessary boost in production with the minimum of hassle. It's also more resilient as long as your enemy still thinks hitting the factory is what it's all about.

355:

(the Chinese) attitude toward deterrence is very odd

I think it may be more that they were as worried about the USSR as they were about the USA.

Certainly a lot of their air defenses were set against an attack from the north, and some of the civil defense procedures assumed soviet-launched short/medium-ranged missiles.

This is based on conversations with a retired PLAAF colonel, so applies more to 20th century policies/priorities. (And of course nothing classified etc.)

356:

At least the concept of trying to hit the factory and not the houses made sense, even if the practicalities were such that hitting the town at all and not the woods five miles away was more luck than anything.

Consider Cherwell's paper:
https://en.wikipedia.org/wiki/Dehousing#Production_and_contents_of_the_dehousing_paper

Given the known limits of the RAF in locating targets in Germany and providing the planned resources were made available to the RAF, destroying about thirty percent of the housing stock of Germany's fifty-eight largest towns was the most effective use of the aircraft of RAF Bomber Command, because it would break the spirit of the Germans. After a heated debate by the government's military and scientific advisers, the Cabinet chose the strategic bombing campaign over the other options available to them.

The RAF was trying, as a matter of policy, to hit people's houses. In 1942.

357:

It's the reason Niven & Pournelle wrote Bomber Harris into "Inferno".

358:

The machine tools and fabrication units weren't exactly in the home i.e. not in the kitchen or bedroom, they were in a shed in the garden or in an annex. It's still a common thing in Japan to this day for light engineering combined with small (20-50 square metre) dedicated engineering shops and the like interspersed with housing developments and shops.

359:

The key part of that paper was "Given the known limits of the RAF in locating targets in Germany and providing the planned resources were made available to the RAF,"

95% of RAF Bomber Command couldn't land their bombload within a mile of a specific target, especially at night. Either they bombed something more than a mile across or it wasn't worth bombing anything, pretty much. That would mean leaving the Germans alone mostly, with perhaps occasional pinpoint bombing of factories by experts like the Pathfinders and the specialists flying low-level Mosquitos and the like on risky daylight missions.

The attitude was that this was a total war and letting the German population which supported their leadership escape unscathed because they weren't in uniform wasn't going to fly. The RAF were getting lots of bombers (the planned resources) and given the limitations of high-level bombing especially at night (the known limits) then German cities were going to get bombed.
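
The "within a mile" figure implies a startling scatter. A quick sketch of what it works out to, assuming a circular Gaussian error model (my assumption; the wartime survey data behind the claim wasn't that tidy):

```python
import math

# If only 5% of bombloads land within a mile of the aim point, what
# does that imply about the typical miss distance?  For a circular
# Gaussian error, P(miss < r) = 1 - exp(-r**2 / (2 * sigma**2)).
r = 1609.0       # one mile, in metres
p_within = 0.05  # fraction landing within a mile (the comment's figure)

sigma = r / math.sqrt(-2.0 * math.log(1.0 - p_within))
cep = sigma * math.sqrt(2.0 * math.log(2.0))  # median miss distance

print(f"sigma ~ {sigma:.0f} m, median miss ~ {cep:.0f} m")
```

Under those assumptions the median miss comes out on the order of six kilometres, which is why a target had to be "more than a mile across" to be worth attacking at all.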

360:

better to put the actual nukes in say one missile in ten and just lumps of concrete that weigh the same in the other nine, and spend the money you save on nukes on building a lot more missiles.

You got the economics exactly back-asswards: nukes -- assuming you've got an enrichment cascade running, or a reprocessing plant for a reactor breeding 239Pu -- are cheaper than ICBMs, and quite possibly cheaper than cruise missiles. There's a lump of exotic machined metal, some rather unusual high-speed electronic detonators, a few other strange ingredients (tritium -- see also "reactor breeding 239Pu"; some kind of aerogel in current-gen US warheads, presumably as a spacer/radiation channel), but the rest is boring 1950s explosives technology.

If you're the USA you manufacture fancy variable yield ultra-lightweight weapons using incredibly high purity pits and you design them to need remanufacturing every 2-3 years. Then you need to have an assembly line running back-to-back with a dis-assembly line tearing down the time-expired warheads and recycling/reprocessing the ingredients, and this whole plant is basically running 365 days a year just to keep your existing fleet of warheads in running order, which is where a chunk of the cost comes in. And another chunk of the cost is securing the stockpile (there's an entire federal agency, the NNSA, just for shipping nuclear weapons around the continental United States in booby-trapped armoured articulated lorries with escorts of heavily-armed Men In Black, and contingency plans for what to do if one is caught in a multi-vehicle pile-up caused by random happenstance). And so on, and so on.

The US government is so paranoid about losing just one gadget, or having just 1% of their fleet of gadgets fail to go "bang" on command, that they gold-plated the entire supply chain and run it according to some half-assed 1950s US military equivalent of ISO9000.

But this isn't necessarily the only (or even the right) way to do it.

361:

Their attitude toward deterrence is very odd and I think harkens back to a time before they were trying to do the regional power thing and only needed to make sure India wasn’t going to get frisky

Think of Japan.

Japan invaded and brutally occupied a big chunk of China from 1931-45; the Rape of Nanking was the world-headline stand-out for brutality back then, but not the only or even worst atrocity the Japanese Army committed.

And Japan is about this -><- close to deploying mature IRBMs with thermonuclear warheads capable of hitting Beijing and other population centres. It's not a matter of whether or not they could, but a matter of how long it would take -- probably single-digit months at most, because they've got the plutonium stockpile, suitable solid-fuel smallsat launchers, everything but the warhead design and the maneuvering bus (which could well be hacked out of existing satellite interstage designs). Japan, right now, has an ion drive deep space mission bringing back an asteroid sample for the second time: I think it's pretty clear that they could have the bomb (for first-rank values of "the bomb") if they ever wanted it, and in diplomacy, people tend to go by capabilities rather than intentions (intentions can turn on a dime, as we've seen since 2016).

362:

American zoning laws are weird, to the eyes of anyone who lives in a thousand year-old city, or an environment that relies on walking rather than automobiles for personal transport. Even here in the UK, where new factories tend to be built on the outskirts of cities (outside orbital ring roads for access), older warehouses and light industrial units are typically scattered throughout residential areas, so that you get things like auto repair shops, artisanal breweries, and food wholesale warehouses within the same block as apartment buildings, restaurants, and pubs.

(One of the things that makes central Edinburgh so pleasant to live in is that it's mostly commercial and retail at ground level and apartments above the shops, so that everything you need is within walking distance or a short bus ride away. And it's something that got lost when they built the outlying residential suburbs with zoning in mind, so that there are vast people warehouses in what are effectively amenity deserts: the designers assumed everybody would drive, but many of these suburbs are so dirt-poor that the residents can barely afford bus fare.)

363:

You got the economics exactly back-asswards: nukes -- assuming you've got an enrichment cascade running, or a reprocessing plant for a reactor breeding 239Pu -- are cheaper than ICBMs, and quite possibly cheaper than cruise missiles.

Some further thoughts on this: Per gram, fissionable materials are very expensive. But a bomb doesn't need all that many grams, at least not as many as some of us would like. And the reason why nukes are eye-wateringly expensive is because the refining process to create that fissionable material consumes vast amounts of money before the first gram is purified, on budgets that make chip fabs look reasonable. Once the production line is running the marginal cost of another kilo of instant sunshine mix is...still not cheap, but nothing that will worry any power that can afford a nuclear weapons program in the first place.

Space capable rockets, even in the 21st century, are not cheap. Contrary to Robert Heinlein's hopes, you can't just buy a bootleg rocket at the county fair...

364:

to Heteromeles @314:
Richard Frank's Downfall: The End of the Imperial Japanese Empire is based on the unsealed archives of the former Japanese empire. We know what happened. Long story short: Yes, it was the nukes. No, it wasn't Stalin. Yes, it is documented, and yes, it is worth the read.
This is a major problem with US historiography - when it comes to important things, facts and logic can take a vacation. Because if they do not agree with America, it is their problem, not America's. Pretty sure that could be the same case with USSR, but OTOH it is long in the past now.

Rabidchaos posted good links @319, they at least cover more or less balanced position.

So to correct the rest of the obvious problems here:
They reasoned that Americans were "soft," in that we weren't fond of mass murder, nor of mass casualties from kamikaze actions and banzai civilian charges.
Which was possibly underlined by a firebombing campaign designed to put as much effort as possible into killing as much of the civilian population as possible and destroying their infrastructure. Besides, this was a point for future market expansion - people without cities and production capacity would be forced to buy foreign goods for generations to come. Ahem: obviously because they were bloodthirsty fundamentalists who did not care about their own population and were ready to sacrifice as many of them as possible to stay in power. Whatever bad things the US did on Japanese soil, they at least destroyed this cohort thoroughly.

That told them that the US had a technological advantage they hadn't counted on.
I thought that intelligence should have told them that, because by that point they had lost most of their ships to the constantly modernized US forces, AND they probably already knew that their codes had been cracked. If a truck hits you on the road, it doesn't really matter much whether it has one trailer or two.

What the Soviet invasion did was to save the Japanese Imperial family. The US took over Japan in the chaos following the surrender, and there was some serious push to depose the Emperor and create a democracy.
This lopsided formulation seems to suggest that the USSR willingly helped the US to suppress the possibility of something like a communist coup in Japan? I doubt that. The USSR's push was to regain access to certain strategic areas in the region and negotiate this with the US - no more and no less, because Stalin knew perfectly well that they couldn't get hold of any territory with a significant Japanese population. What they were trying to avoid was Hirohito surrendering everything to the US, because the Empire still held ground in many regions beside the mainland - especially in China. Same as with Germany: the war progressed until capitulation because both the US and the USSR wanted complete capitulation, however many casualties it would take - they had learned the lessons of WWI.

365:

to JBS @344:

Separate Anti-Aircraft Battalions for a total of 40,000 Japanese military personnel stationed in the city. It was also a center war industry, manufacturing parts for planes and boats, for bombs, rifles, and handguns.
Unfortunately the heavy industrial facilities seem to have been mostly undamaged, because the bomb came down in the valley where the city was. And IDK about the stationed troops, but something tells me that only the HQ would have been killed immediately.
https://www.theguardian.com/cities/2016/apr/18/story-of-cities-hiroshima-japan-nuclear-destruction#img-4

Nagasaki was one of the largest seaports in southern Japan, and had wide-ranging industrial activity producing ordnance, ships, military equipment, and other war materials.

It would have done them more honor, and better excused them, if they had actually hit that port and not the population center beside it.
https://blogs.forbes.com/jimclash/files/2018/08/IMG_8035.jpg

Most people on this planet agree that it was a terrible war crime and a useless loss of life for the equivalent of a PR stunt that would hail the ascension of the US to the top of the food chain with its absolute power of destruction.

366:

Reading up on the history of the atom bombings, it's apparent that the second raid was a near-clusterfuck: due to a balky fuel pump the bomber had a shorter-than-nominal range, and the primary target (Kokura) was obscured by smoke and cloud. Nagasaki was a secondary target, also blanketed by cloud, so they tried to bomb using radar alone (and made visual contact at the last moment). They missed the aim point, and had to make an emergency landing at an unprepared base in Okinawa -- one of their engines actually crapped out from fuel starvation on final approach. (More here.)

Frankly, if not for the political pressure to conduct a second raid so soon after the first, it would have been more sensible to abort the mission and try again a day later -- even if you consider the circumstances made it necessary to the war.

367:

Which would have done them more honor and excuses if they would actually hit that port and not the population center beside it.

You may be thinking of a different event than the Nagasaki bombing. From the mouth of the river to the hypocenter is only three kilometers (and about 2,400 meters northwest of the original target point); as already observed, bomb accuracy then was not what it is now. Bockscar was dropping on visual, through a break in cloud cover, so pinpoint accuracy was not expected. The Fat Man device came down in the industrial Urakami Valley, leaving the more densely populated city center sheltered by hills.

368:

to Charlie Stross @363:
I think it's pretty clear that they could have the bomb (for first-rank values of "the bomb") if they ever wanted it, and in diplomacy, people tend to go by capabilities rather than intentions (intentions can turn on a dime, as we've seen since 2016).
I would like to suggest that it is indeed not the case, at least in the traditional sense of the word. Engineering is an art to some degree, and most modern big engineering ventures are works of tradition rather than pure research effort - if you want to create something new, you have to use what is already available. So if you want to create a bomb in Japan, you can do so... if you have access to all the related materials and all the specialists you really need. Or take rocket science: if you don't have the specialists, or the experience, and can't buy licensed technology, you are flat out of luck. Consider modern ventures like SpaceX - they all grew out of people moving around and assets changing hands around a new business model, to the point that there's a corporation that could potentially have made itself an independent international entity. Well, too bad it was weighed down and did not fly (you can't really count all those years of gaslighting and 3D renders as leverage for actual working transportation).

And the major problem is that everybody, literally every single competent person out there, is on the lookout for unaccounted-for fissionable materials, so the Iran situation was a pretty explosive one just because somebody suggested there could be a nuclear device in the middle of it - without proof of such. If some Middle Eastern country (wink wink) actually revealed that it had unaccounted-for nuclear devices, much less ones hidden from the public for decades and not actually that safe from uncontrolled use, the only viable course would be to put their hands in the air, squint really hard, and hope they won't get obliterated in the time it takes to secure the payload.

A couple of years ago somebody suggested that a country like Ukraine needs a nuclear arsenal as a "deterrent" - well, this country has access to fissionable material, it used to have a nuclear arsenal, and it probably even has some people who know what a nuclear device looks like. Actually, I think I've only ever seen one or two photos of actual nuclear device components. Naturally, the suggestion was received like any other ridiculous statement from a mad fascist government; unfortunately it has its own implications. With the destruction of the recent nuclear agreement there are increasing attempts to move the missiles even closer to their targets - preferably right on top of them, so as not to use any rockets at all. That is, there have been suggestions to move American bombs directly to the former USSR borders, however fxxking stupid that sounds from a military perspective, but policy is policy.

In other words, even though the current abolition of the INF treaty is aimed at China, it may actually have other consequences in Europe. Not right now, maybe not at all, but one wrong move and what seems to be a mighty and venerable bunch of freedom fighters will be thoroughly destroyed, because it is one thing to have some options for negotiation and future reconciliation, and another thing entirely when you have mad, nuke-toting rascals directly on your border. And I'm not just talking about dirty bombs - the US involvement in the region wouldn't be ignored either.

369:

to Charlie Stross @368:
They missed the aim point, and had to make an emergency landing at an unprepared base in Okinawa -- one of their engines actually crapped out from fuel starvation on final approach. (More here.)

Oh, ok, I think I have completely forgotten these details since last time I was reading about that. It is even obvious on the map of destruction that the blast wave trapped in the valley exited through the harbor gap and wrecked some of the port structure.

Still, I wanted to mention that even though it comes up again and again that people defend US actions in the later stages of the war, it is plain to everyone that those actions were dictated not by military necessity but by opportunism - to destroy the independent industrial capacity and the will of the people, in anticipation of post-war occupation and politics. Just a cherry on top of the whole destruction - and, uh, a very profitable one at that.

370:

letting the German population which supported their leadership escape unscathed because they weren't in uniform wasn't going to fly

And there was if anything an even stronger anti-Japanese attitude in the US. I very strongly doubt that Japanese housing would have escaped bombs even if all production facilities were segregated as Pigeon suggested.

371:

Fat Man was armed in the air on the way to Kokura (it was a city supporting a major military arsenal and weapons construction complexes). There wasn't a procedure to disarm it, I understand, it was going to be dropped somewhere after it was armed. Disposing of it into the sea with the fuzes not set wasn't acceptable as it might be recovered by the Japanese. Flying back to an American base with an armed nuclear weapon on-board wasn't going to happen either.

The Nagasaki bombing did some damage to the port area -- I've been up the mountain on the west side of the river above the Mitsubishi shipbuilding facilities and seen scorched rock walls and damaged trees lining the road.

The bad news about nukes is that they are very effective at the centre of their explosion but those effects (heat, mainly accompanied by blast) decay away quite rapidly. A widespread firestorm from chemical-energy weapons is a lot more destructive even if it doesn't match the kiloton numbers in terms of explosive energy.

372:

A widespread firestorm from chemical-energy weapons is a lot more destructive even if it doesn't match the kiloton numbers in terms of explosive energy.

And for a real firestorm, nothing quite comes close to a multi-kilometre-diameter asteroid impactor in the opposite hemisphere. You don't get a mushroom cloud from something like the Chicxulub impactor: you get a "rooster tail" of debris all the way out to geosynchronous orbit (it's speculated that some of the rubble from the Chicxulub event may have landed on Titan). Then most of it rains down as gravel and dust -- gigatons of it -- in the opposite hemisphere on the other side of the world. At which point it gives up most of its gravitational potential energy in the form of heat, and the sky across an ellipse a couple of thousand kilometres long heats up and glows radiatively with a black body temperature in the thousands of degrees, baking everything on the surface and bringing the surface waters to a rolling boil.

Now that, my friends, is how you conduct a first strike (on somebody else's planet).
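
The energy budget behind that claim can be rough-checked on the back of an envelope. The numbers below (re-entry speed, rock heat capacity, glow temperature) are my own illustrative assumptions, not figures from the comment:

```python
# Back-of-envelope for the re-entering ejecta: assume (illustratively)
# debris falls back at roughly Earth escape velocity and dumps its
# kinetic energy as heat high in the atmosphere.
v_reentry = 11_200.0             # m/s, ~escape velocity (assumed)
e_per_kg = 0.5 * v_reentry ** 2  # specific kinetic energy, J/kg
print(f"{e_per_kg:.2e} J/kg")    # ~6.3e7 J/kg

# Heating rock from ambient to a glowing ~1800 K takes roughly
# c_p * dT ~ 1000 J/(kg K) * 1500 K = 1.5e6 J/kg, so each kilogram of
# infalling debris carries tens of times the energy needed to make
# itself incandescent; the surplus radiates down onto the surface.
e_incandescent = 1000.0 * 1500.0
print(e_per_kg / e_incandescent)
```

Which is why the sky itself, not the impact site, does most of the cooking in the antipodal hemisphere.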

373:

...although to really do it properly you collide two full-size planets, moving in opposite directions with the target planet half way in between.

374:

"So, if you want to create a bomb in Japan, you can do so... if you have access to all the related materials, and all the specialists you really need."

You say that like you think they don't? They have the nuclear materials, which is the hard bit. They have nuclear physicists. You can simulate your design by computer these days well enough to have confidence it'll go off without ever having to bend metal. You can spend as much time as you like making sure the design's right, then when it comes to making the real thing that's a matter of precision engineering and metallurgy, which the Japanese are really good at.

375:

Naah, for my money it's hard to top Greg Bear's description in "The Forge of God". (Alien death robots take two small moons of Saturn, convert them into lumps of neutronium and anti-neutronium about the size of a grapefruit, and fire them into the Earth slow enough that they spiral in, converging as their orbits decay -- the Earth's core approximates to a hard vacuum, as far as lumps of neutronium are concerned -- then mutually annihilate. Meanwhile, more alien death robots have seeded the undersea subduction fault lines in the Earth's oceans with gigaton-range direct-fusion bombs to rupture the plate tectonics just before the shockwave from the emerging core reaches the surface and blasts the lithosphere right into orbit around the Earth's centre of mass. Description of the end of the world narrated by informed onlookers at the surface, waiting to die ...)

376:

I have a (probably) false hope that some elements of the Secret Service or possibly the US military have a quiet plan to take 45 'off the board' if he goes full bunker and decides to destroy the world.

377:

They don't have sufficiently pure Pu-239 in stock and they've shut down the Monju breeder reactor in preparation for decommissioning so they don't have a breeder that can make pure Pu-239 despite expensive attempts to keep Monju operational even after a couple of engineering disasters there (dropping the refuelling machine into the core is not a good move...)

What they do have are functional spent-fuel reprocessing lines which could, if the claims about laser enrichment are true, produce sufficiently pure Pu-239 for a few bombs from existing stocks of spent fuel. It would take a bit more than a year to do so, and a lot of people would be in the loop; in today's world of whistleblowers and social media it's likely that information about the project would leak out.

Remember, though: nukes are not made to be used, they're made as counters to perceived threats. Being able to nuke Beijing doesn't mean they HAVE to nuke Beijing, it's just something that the folks in Beijing have to take into account when they sit around the table with a nuclear power.

What the Japanese do have is a more outward-looking defence capability, no longer limited to their shoreline since it's clear modern threats can appear from over the horizon at short notice hence their new aircraft-carrier-launched strike fighters, the extended-range deepwater Soryu-class subs, the P-1 marine patrol bomber and such.

378:

Definitely one of my favorite science-fictional bits.

379:

And I for one am glad it's only fiction!

380:

Why not?

Infrastructure.

Back in the 90s, someone gave me the plans for a nuclear warhead in poster format as a gag gift. I had it up in my office for many years, which amused the physicists who passed through my office (long story). The thing wouldn't work (for one thing, the firing circuit was the sort of fractal short-circuit that only an engineer taking the piss could come up with), but it made two points abundantly clear:
1. Designing a nuclear warhead is easy. Multiple people, including IIRC high school students, have done it. Yes, making one that can be launched by an ICBM is hard, but designing one that could fit in a cargo container takes not much skill.
2. Getting the materials to make a bomb is almost impossible, which is where the humor of the poster came in. (Examples: For heavy water, take ordinary tap water and distill it over and over and over and over and over again. For U-235, buy a lot of uranium-yellow paint and distill out the U-235. Sourcing and milling the plutonium will be hard, and we're not going to tell you much about making the high explosives for the lens, except that you'll need some weird exotic atmosphere for the reactions.)

And that second one is the point: nukes take big infrastructure, and if you're looking for people assembling bombs, look for the infrastructure they need to do it.

Sadly, post-9/11, I'd probably get locked up on terrorism charges for having the poster on a wall, and I have no idea where it is now. Probably I threw it out during a move. Oh well.

381:

Here's an obscene counter-factual/alt-history.

Let's assume that the peaceniks above are right, and that the US could have got Japan to surrender without nuking Hiroshima and Nagasaki. What would have happened next?

1. The Korean Peninsula would have been entirely communist, while Japan would have been divided a la modern Korea, with a communist north and a capitalist south.* Depending on where the DMZ was relative to Tokyo, the Imperial family would either have been in exile somewhere in the south (probably Osaka) or Tokyo would take the place of Seoul.

*Korea was the recipient of Downfall rather than Japan, as the Americans rapidly and badly retooled the Japanese invasion plans to take care of the communist advance down the Korean peninsula. During WW2, Korean communists were the only group to fight against the Japanese Imperial forces in anything like a vaguely organized fashion, so absent the US bulling in from the south, the Korean peninsula would almost certainly have become entirely communist. Similarly, we can see how Downfall played out, so saying something similar would have happened in Japan is straightforward.

2. Given how fast the Korean war blew up after WW2, a divided Japan would have been the locus for a war between the US and USSR in the 1950s.

3. Given that nukes existed, and given that the secrets to their construction were leaked shortly after WW2, it's likely that the USSR would get nukes about as fast as it did in our world: 1949, give or take.

4. Because no one had seen nukes used in anger, the war on the Japanese peninsula would have turned nuclear. So in addition to the mess of Operation Downfall (5 million Japanese dead was the estimate), they'd have a nuclear war the next decade. Sayonara to Japan as an industrial power.

5. Per Graff's Raven Rock: The Story of the U.S. Government's Secret Plan to Save Itself--While the Rest of Us Die*, there were all sorts of interesting plans to make US cities proof against nuclear war in the 1950s, including burying them. In Alt-World, after Japan is comprehensively destroyed in a Pyrrhic Japanese Atom War (and quite honestly, it doesn't matter who won Atom War I), we can assume that cities on the US West Coast and elsewhere would get very serious indeed about hardening themselves against nuclear assaults: those who could went underground, with more cities doing it as the range of missiles extended.

*Note that FEMA's predecessors tried to save US civilians from nuclear war. They also tried to save all three branches of the US government from nuclear war. When everything except some plans to save the Presidency was obviously failing miserably, they retooled to save the President. When those plans were shown by 9/11 to be unworkable, they retooled again to come up with a long line of successors--and to make Post-Disaster US an authoritarian dictatorship with an appointed president perhaps someday responsible for reconstituting the Legislative and Judicial Branches.

6. There's no reason to assume that a limited 1950s nuclear war would have stalled the petroleum economy, so climate change would have roared along in Alt World, possibly a bit more slowly than in this world, as rampant consumerism doesn't necessarily play well with the more limited space of underground cities, and rampant warmongering might have taken the place of rampant consumerism entirely (there are rampant parallels, with missiles being the ultimate consumer goods). Still, I suspect that Alt World would have serious climate change problems by, oh, about now.

7. However, underground cities are better able to deal with certain climate extremes (like Black Flag Weather) than are the current car-centric sprawls we have now. So, ironically, a world scarred by nuclear war might actually be better able to deal with self-inflicted climate change than ours is. Of course, they'd be no better at getting off petroleum or not having horrible wars, but that's irruptive civilization for you.

382:

.. the sky across an ellipse a couple of thousand kilometres long heats up and glows radiatively with a black body temperature in the thousands of degrees, baking everything on the surface and bringing the surface waters to a rolling boil.

But luckily, not everywhere. Chicxulub left the Mississippi drainage area growing nothing but ferns (because spores survive better than seeds). But the area was repopulated later from refugia in eg Alaska.

The brief high temperature preferentially killed animals that were out in the open. A lifestyle involving dens, or holes in trees, saved many species - but of course, that's the little guys, and mid-size guys. Big critters, well, oops. Birds were little.

383:

rocketpjs
Looks as though that might not be necessary ... given reporting today from a (briefly) Trump appointee (Scaramucci) stating that the evidence is such that even solid Republican support for DT won't be enough ....

DonL
I remember reading that no animal over 70kg survived & very few above about 30-40 kg

384:

And for a real firestorm, nothing quite comes close to a multi-kilometre-diameter asteroid impactor in the opposite hemisphere.

Also R-bombing, which IIRC was a thing in A Fire Upon The Deep.

On the subject of R-bombing, there's this: https://www.quantumvibe.com/strip?page=2033, where a large object is accelerated to 0.9c by means that appear to violate conservation of energy, but oh well.

However, there's always the possibility of giving a Kuiper Belt object a nudge to send it on a well-aimed orbit into the inner solar system. It would take a while to get to Earth, but one assumes the perps are far-sighted.

385:

Pigeon @376:
You say that like you think they don't? They have the nuclear materials, which is the hard bit. They have nuclear physicists. You can simulate your design by computer these days well enough to have confidence it'll go off without ever having to bend metal.

I would not deny any of it; it would seem the Japanese have all the components needed to make atomic bombs, yet they have no atomic bomb. The US has all the materials to build the Space Shuttle, yet no Space Shuttles are flying. China has all the technology to fly to the Moon, yet the Moon is as far away as it's ever been. Basically, that means that even if they were hell-bent on it and poured a lot of money in, it would still be another decade before they rolled out a first working prototype. Not enough experience and practice; and there's a great chance that even though the bomb is assembled and tested in separate parts within design parameters, the final result may well fizzle out. There's always that chance, because real engineering only works in practice, and to work in practice you need people who know what they are doing and have experience doing it.

You can spend as much time as you like making sure the design's right, then when it comes to making the real thing that's a matter of precision engineering and metallurgy, which the Japanese are really good at.

I am an engineer, this is my job, so believe me, nothing ever works completely straight even in plain old electrical engineering. This is one of the mistakes that venture capital makes regularly: investors think that our knowledge of the Universe is so complete and mighty that we can pour several billion into a project, churn some data through some supercomputers, and deliver complete results right away. Remember that big thing called ITER? Sure, it is a spearheading project, but we are stuck on it even with DECADES of experience, and we have no idea how well it will work. Now, since nuclear weapons are classified, anybody who thinks they are good at everything that makes the bomb go boom will be spearheading into the classified area with unexpected results.

In other words, if there's some "super-secret" facility that would super-secretly produce the materials needed for a nuclear device, it is only possible if it's been around long enough (decades) and/or it is covered by some powerful allies. They need specialists who not only know how to make certain parts to certain specifications, but also HOW EXACTLY to implement them, WHAT EXACTLY the method is, and what to do if the sample is FUBAR, if the industrial machine that delivers the product isn't to the same specification as the laboratory one, etc. As a person who recently survived (sort of) his first engineering audit, hell of a job, I say that!

Also this proverbial example: https://en.wikipedia.org/wiki/FOGBANK

Here's the good reading for those who are interested in this topic:
https://web.archive.org/web/20111206183617/http://wrttn.in/04af1a

386:

.. there were all sorts of interesting plans to make US cities proof against nuclear war in the 1950s ..

The design of the interstate highway system had a lot of military input - for example, there were straight sections of highway, specifically to be emergency landing strips.

More importantly, there was an effort to decentralize America. A side effect of that effort was the growth of suburbia, and the poor service to inner city slums.

387:

That bit gets a bit complicated. And yes, I do work on land use issues.

Yes, I agree that the interstate system has some serious military uses. That's been true of every nationwide road and rail system ever built. As for decentralizing cities to cope with nukes, that was only true until hydrogen bombs got big enough and plentiful enough that sprawl was an inadequate defense. Sprawl has rather more to do with things like farmland being cheap, land value going up when it's converted from agriculture to high-density residential, houses needing less water than crops, and cities getting a lot of property tax revenue from new developments. This is why inner-city redevelopment and densification are much, much harder to do: they have none of the advantages of sprawling onto undeveloped land.

More seriously, Raven Rock is worth reading to see all the ways (military) planners tried and failed to come up with ways to save people from nuclear war. In the end, even their plan to assure continuity of government may well be unworkable, since it depends on some set of unknown dudes* a) surviving nuclear war, then, b) opening the sealed envelopes that tell them what their new jobs will be (aka "You are Now President!"), and c) enough of the survivors following them that they can assemble a functional government.

The parallels with planning for any disaster are painfully obvious.

*Most of the people in the line of succession are reportedly current or former politicians. Unfortunately, Raven Rock was written under the Obama administration, so it's unclear what, if anything, the US is prepared to do if the current White House Denizen gets nuked.

388:

Your Honda/Toyota/whatever, if assembled in Japan, likely has things like a wiring harness assembled in someone's basement. To this day, families get supplies of wires with connectors, tie-wraps, a layout board, and other assorted things, and create harnesses for the nearby plant: 10 to 30 per week per household.

389:

It's also more resilient as long as your enemy still thinks hitting the factory is what it's all about.

Given they started the war against the West with the expectation that we would agree to pull back and ask for an armistice, air defenses in the homeland didn't fit into the plan. Not everyone was on board with this plan (see Yamamoto), but it was the plan.

390:

Which has been going on longer than you give credit; ask Scots historians about "Wade roads", and/or British archaeologists about "Roman Roads", and clear your evening...

391:

Frankly, if not for the political pressure to conduct a second raid so soon after the first, it would have been more sensible to abort the mission and try again a day later

From what I've read about the construction of those first bombs their shelf life once assembled was very short. Measured in weeks or even days. They were the product of a science lab, not an assembly line.

I saw a talk by the author of a book about the US nuclear deterrent from '46 to '50. It took a while to gather up all the lab notes and turn them into something even a skilled worker could build; those notes were not exactly "step 1, step 2, etc..." Once the war ended, most of the folks at Los Alamos beat feet in a hurry to get home.

392:

Still, I wanted to mention that even though it comes up again and again, with people defending US actions in the later stages of the war, it is very much open to everyone that these actions were dictated not by military necessity but by opportunism: to destroy independent industrial capacity and the will of the people, in anticipation of post-war occupation and politics in the world. Just a cherry on top of the whole destruction - and, uh, a very profitable one at that.

Nope. Neither the documentation in the US nor the documentation in Imperial Japanese archives supports this view. On the US side, this really was about ending the war with minimum loss of life on all sides.

Since the US helped rebuild Japanese industrial capacity rather rapidly after the war, with the result that the current Japanese GDP is over three times the size of the Russian GDP (which in turn is only a half billion larger than the GDP of South Korea, which the US also helped rebuild), I'd say the results speak for themselves.

393:

Neither the documentation in the US nor the documentation in Imperial Japanese archives supports this view.
The only problem with that is the fact that the US very thoroughly occupied Japan immediately after the war was over, and in fact still does so at present. The loyalty of this nation, as was mentioned before, was bought by allowing the current imperial family (if I formulate it correctly) to stay in power post-war.

Since the US helped rebuild Japanese industrial capacity rather rapidly after the war, with the result that the current Japanese GDP is over three times the size of the Russia GDP (which in turn is only a half billion larger than the GDP of South Korea, which the US also helped rebuild), I'd say the results speak for themselves.
It doesn't have to be self-contradictory: the US is known to follow a certain idea of market expansion/regulation known as "digging holes and filling them up again". Destruction of local industry by lethal force is the easiest and most direct form of market competition, and the US applies it everywhere it can, especially when it comes to Nazi Alliance activity.
https://www.cato.org/publications/commentary/destroying-serbia-order-save-it

Japanese industrial and economic capacity, as well as their GDP, is very much connected to US willingness to support it (and the debt they accumulated), and many people still consider it a fair exchange. Possibly. Possibly not, given the economic stalemate Japan has been stuck in ever since the 90s. The question is still: is a large and healthy economy really useful for anything if it is so dependent on a foreign power?

394:

ask Scots historians about "Wade roads"

Or listen to this verse of "God Save the King":

Lord, grant that General Wade,
May by thy mighty aid,
Victory bring.
May he sedition hush,
and like a torrent rush,
Rebellious Scots to crush,
God save the King.

395:

“ The only problem with that is the fact that US very thoroughly occupied Japan immediately after the war was over and in fact still do so in present. Loyalty of this nation, as was mentioned before, has been bought by allowing current imperial family (if I formulate it correct) to stay in power post-war.”

Where is your competing evidence then ?

Also the idea that you are criticizing others for an ideological skew to their view of history is pretty laughable. Very much pot calling the kettle black. You are about as politically brainwashed as anyone I have ever met

396:

True, with the note that you can still find some of Wade's military roads in the Scottish Highlands; witness "an gearastan", the Gaelic name for "Fort William", which translates directly as "the garrison".

397:

Thanks for that last link. I’ve seen such on a much smaller scale.
If humanity still has a mechanized civilization in a hundred years, “industrial archeology” will be a recognized and respected profession.

398:

Yeah, because of course Russia or its predecessors never occupied anybody, even after the horrors of WW2.

399:

Hate to say it, but sr’s comments here were very predictable, almost algorithmically predictable. Not that it means much, it just isn’t worth spending a lot of time arguing.

400:

The roads Wade created into the Highlands were primarily to carry artillery -- there had been military roads since Roman times with way-forts in places like Carpow in Perthshire which endured for hundreds of years but they were for light wagons, horses and mules as well as men on foot.

401:

Part of that is because a lot of them are correct, God help us all :-( In the posting you referred to, the only paragraph that isn't is the last, though it was true up to about half a century ago!

Even in the 1990s, some 'strategic' Japanese exports were constrained (and, to a great extent, controlled) by the USA. We did a supercomputer procurement then, and I was surprised that the conditions imposed by a Japanese vendor on a UK sale were those imposed by the USA, which had to authorise any changes. The same is true for exports from the UK, but that's another story :-(

402:

Export conditions were because the US was the source of certain components such as high-end processors and the like and the export restrictions on those parts to Japan for integration into systems precluded the sale of such items onwards without similar restrictions being imposed.

403:

Basically that means that if they even hell-bent on that and pour a lot of money in it, it's not going to be another decade until they roll out first working prototype.

1938 to 1945 is less than a decade, and we're way beyond 1938 in even public knowledge these days.
404:

We have similar things going on with aircraft sales. Boeing and Airbus have technologies that come from each other. And ditto their suppliers. So aircraft sales in almost any direction to any country get national interests all in a tither all the time.

405:

And no need for rooms full of ladies running fancy mechanical calculators to do 1000s of long division calculations for days or weeks at a time.

406:

ITAR I presume?

407:

to Damian @401:
Hate to say it, but sr’s comments here were very predictable, almost algorithmically predictable.
It doesn't really matter what the comments are if your response is always a strawman argument, i.e. mostly plain denial of things I wasn't even talking about.

to Elderly Cynic @403:
In the posting you referred to, the only paragraph that isn't is the last, though it was true up to about half a century ago!
Well, I was referring to a certain very large US base that is located on Japanese territory. Though, as was noted several times here before, nowadays it is rather difficult to tell the difference between exclusively US interests and globalist ones. Well, certainly not from my viewpoint.

to John Hughes @405:
1938 to 1945 is less than a decade, and we're way beyond 1938 in even public knowledge these days.
That is indeed a very disappointing tendency among the newer generation, who think modern science is so far ahead that they could build a space rocket in the garage if they desired to, maybe even in their free time instead of watching funny cat videos. They can't. They can't even remember that the current generation has been thinking almost exactly the same thoughts for about 20 years already.
http://www.spaceref.com/news/viewsr.html?pid=3698

408:

Supercomputers have applications in nuclear weapons development as well as signal processing for radars, encryption and the like so the sales of sufficiently powerful processors and high-speed networking components have been limited in the past. The days of 387 co-processors being on the banned list are long gone though.

409:

Don't know how trustworthy it is, but I ran across a story about Amazon's Alexa telling someone they should stab themselves in the heart for the good of the planet.

https://www.thesun.co.uk/tech/10585452/mum-amazon-echo-speaker-kill-herself/

410:

Re: 'Even in the 1990s, some 'strategic' Japanese exports were contrained (and, to a great extent, controlled) by the USA.'

I see two reasons for Japan just not bothering with the US:

1- With the US's share of Japan's imports and exports in continuing decline, the general distrust of/disrespect for DT, Japan signing the TPP11 (the US walked out) last year, kinda hard to sell voters in Japan on US-imposed trade rules.

2- Prime Minister Abe is to-date the longest serving PM: he's a nationalist, revisionist, and his economic views sound kinda similar to DT's. Basically we have a perfect set-up for a prisoner's dilemma deal apart from the little detail that DT's screwed over pretty much every business 'partner' he's ever had. (No idea whether Abe is a backstabber. Based on a couple of Japanese inspired sagas that I'm only passingly familiar with, backstabbers are acceptable leaders provided 'ruses' are considered aspects of their strategy.)

411:

Oh, really? As far as I know, ALL of the components were Japanese designed and manufactured - and, given what I know about the company and system, that is exactly what I would expect. Even back then, Japan's technology led the USA's in such areas.

Furthermore, as the vendor explained it to us, it had been imposed as a constraint on Japanese companies as part of allowing international contracts between them and USA ones.

412:

The days of 387 co-processors being on the banned list are long gone though.

I remember when the Sony PlayStation 2 was on the export-control banned list, though: an ordinary cheap PC today kicks sand in the face of a 1990-vintage supercomputer in most if not all dimensions of performance.

ITAR persists because it's kind of hard for US Congress to repeal/sunset old laws, especially in the security field -- the precautionary principle means there's no benefit from doing so if the law's no longer necessary and a huge penalty if you get it wrong and it was still useful -- but also because it handicaps competitors for American corporations. As Systime (in Leeds) learned the hard way when DEC went after them in the early 80s.

413:

Rocketpjs @ 378: I have a (probably) false hope that some elements of the Secret Service or possibly the US military have a quiet plan to take 45 'off the board' if he goes full bunker and decides to destroy the world.

If I understand how it works, there's NOT a big red button on his desk that he can push to launch the nukes. He's got a military aide with a briefcase full of codes (I'm guessing for various contingencies ... wouldn't want to nuke the Russians if there's an incoming missile from North Korea), and to launch, Trump has to select the appropriate set of codes out of the briefcase. So I think there are a bunch of circuit breakers that might trip if he just went completely off the rails and decided to nuke California's 8th Congressional District.

But that does raise an interesting question. For all the fear that Trump might try to "wag the dog" with nukes if he really believed he was going down, WHO would he target? Who is he going to nuke if he can't nuke California (and I'm pretty sure SOMEONE would step in to stop him doing that)?

414:

The intellectual property rights to some parts of a given supercomputer were not necessarily wholly owned by Fujitsu -- they tended to use SPARC-based designs IIRC, and that architecture was derived from Sun Microsystems, a US-based company. Earlier than that, bit-slice systems were common for vector-calculation machines, and again the silicon and the designs were typically American-derived and under ITAR control.

415:

That's my understanding of how the nuclear "football" works too: he can initiate one of a handful of pre-planned responses, but that's it -- to actually do something unanticipated ("I want to nuke Malta! Why isn't there a plan to let me take out the Knights Templar?!?") requires a whole bunch of committee meetings and setup at a lower level, that would take days or weeks to prepare.

Other nuclear powers have different systems. The UK, for example -- the submarine on deterrent patrol has a letter from the PM in a sealed safe that takes both the captain and the XO's agreement to unlock (following certain conditions being met/signals received or not received). These orders govern what to do after the UK is confirmed to have received a nuclear first strike. Meanwhile, the PM can't actually order a nuclear first strike without an act of parliament (or, allegedly, the cooperation of the Duke of Cornwall, thanks to an obscure loophole) -- the Nuclear Explosions Act actually makes doing so a criminal offense on a par with treason or murder.

416:

paws4thot @ 392: Which has been going on longer than you give credit; ask Scots historians about "Wade roads", and/or British archaeologists about "Roman Roads", and clear your evening...

The origins of the Interstate Highway System have a lot more in common with the reasons the Romans built roads throughout their empire than with serving as emergency runways for an Air Force that didn't yet exist when the planning started back in the 1930s. They are primarily LOGISTICAL, rather than strategic or tactical.

https://www.fhwa.dot.gov/infrastructure/convoy.cfm

Eisenhower understood the benefit of good roads long before he saw the German Autobahn.

Not to change the subject too much, but I think it would work better if the government owned the rails, just as it owns the highways, and railroad companies paid taxes to use the rails the same way truckers (actually, anyone who has a motor vehicle) pay taxes to use the roads.

417:

I’ve seen ITAR requirements mindlessly flowed down until there’s a “No Foreign/Keep filed when not in use” stamp on the source control document for a MIL-SPEC washer. (For those who don’t know, MIL-SPECs are freely available for download at dla.mil.)

418:

David L @ 393:

Frankly, if not for the political pressure to conduct a second raid so soon after the first, it would have been more sensible to abort the mission and try again a day later

From what I've read about the construction of those first bombs their shelf life once assembled was very short. Measured in weeks or even days. They were the product of a science lab, not an assembly line.

I saw a talk by the author of a book about the US nuclear deterrent from '46 to '50. It took a while to gather up all the lab notes and turn them into something even a skilled worker could build; those notes were not exactly "step 1, step 2, etc..." Once the war ended, most of the folks at Los Alamos beat feet in a hurry to get home.

I don't think that was really a factor in the Nagasaki bombing. If they'd known the aircraft was going to have trouble on the way to the target, they most certainly would have postponed the mission until the problems could be corrected. Same thing if they'd known the primary target was going to be obscured by weather; delaying the mission for a couple of days wouldn't have been a problem. The main factors were that the device could not be easily disarmed once armed, the scarcity of usable devices at the time, and the fact that for whatever reasons the device couldn't be jettisoned. Once the mission was launched and the device was armed, it HAD to be used somewhere.

As the saying goes ... Once the pin is removed, Mr. Hand Grenade is NOT your friend!

419:

More specifically, see the history of the 1919 Motor Transport Corps trans-continental convoy, which was observed by one Lt. Col. Dwight Eisenhower. "The actual average for the 3,250 mi (5,230 km) covered in 573.5 hours was 5.65 mph (9.09 km/h) over the 56 travel days for an average of 10.24 hours per travel day." Which tells you something about the state of the US road system in 1919! It's noted that "practically all roadways were unpaved from Illinois through Nevada."

So, no surprise that the US interstate highway system got its biggest impetus during the presidency of one Dwight D. Eisenhower (who had also been the supreme allied commander in charge of the western allied force invading the Third Reich in 1944-45).
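The quoted convoy figures are easy to sanity-check. A trivial sketch (the numbers are the ones in the quote; only the arithmetic is mine, and the quote's 5.65 mph presumably reflects a slightly different odometer figure):

```python
# Sanity check of the 1919 Motor Transport Corps convoy figures quoted above.
distance_mi = 3250     # total distance covered
total_hours = 573.5    # total hours on the road
travel_days = 56       # number of travel days

avg_speed_mph = distance_mi / total_hours   # quoted as 5.65 mph; computes to ~5.67
hours_per_day = total_hours / travel_days   # quoted as 10.24 hours per travel day
miles_per_day = distance_mi / travel_days   # ~58 miles per travel day

print(f"average speed: {avg_speed_mph:.2f} mph")
print(f"hours per travel day: {hours_per_day:.2f}")
print(f"miles per travel day: {miles_per_day:.1f}")
```

Under 60 miles a day, cross-country, which indeed says a lot about the state of US roads in 1919.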

420:

their shelf life once assembled was very short

And here is a picture of a man about to assemble the Nagasaki bomb. That thing in his hand is the plutonium core.

421:

I'll play devil's advocate on the timing of the nuclear bombings.

The problem the Allies faced is that late summer is typhoon/storm season in that part of the Pacific. They would have had to wait for months to get reliably clear weather. Nagasaki especially was considered marginal from the get-go, but on the other hand, I'm pretty sure they had no expectation of better weather showing up any time soon after that.

"Fortunately" for Japan, the islands didn't get stuck in a situation where the only area clear of clouds was over Tokyo, because I suppose the Allies might have even gone for that if it was the only target available.

422:

You realize that you're giving me odd story ideas. Let's see, WWII ended with national collapses*, and in the end, the US balkanized. East Coast, West Coast, South, Texas (trust me on this), and, oh, yes, the "Real US", capital in Kansas, and it has a navy....

My father got out of the US Army Air Corps in '46, I think, and he told me that there'd been mutinies, or near-mutinies, over men wanting out at last, and rumors of fighting the USSR....

423:

60% were "Mac only"?

So, they were buying into the advertising, and getting overpriced commodity hardware? Meanwhile, I'll wager the administration was also not supporting Linux, or anything but M$.

424:

"Earlier this decade" puts it in the cross-hairs for the Windows Vista era, which was an abomination: meanwhile, the Intel Macs were at a peak. OSX was relatively solid -- the dev team lost their edge later -- and the unibody macbook construction was way better than most PC laptops of the day and they hadn't borked their keyboard mechanisms: also, the retina displays had just come in. There was a window of a few years when Mac laptops were clearly better, even if more expensive, than the Wintel equivalent -- rugged unibody construction and reasonably fast OS vs. old-style laptops with plastic/laminated bodies and Vista.

Meanwhile, university IT departments, like every other IT department, lag behind user demand by at least two iterations.

425:

As I said, ideas for stories. The Unpleasant Colonies start a revolt against the Terran Confederation, and the Confederation pulls out one of its emergency plans, and sends out the hyperlight signal, and the first thing that happens is that all the colonial life support systems start to shut down, and will refuse to respond until they surrender.

None of the colonies, of course, have fab capability....

426:

Back then, yes, it would have been unlikely.

As I noted in my last post, there were mutinies and near-mutinies in the US Army around '46 - the troops wanted to go home. I have *zero* doubts that that was the case in a lot of other nations.

The GOP, on the other hand, loves funnelling tax dollars through the military budget to them and their friends.

427:

I have a much simpler and quieter answer: I pull out my electric mulching mower after most of the leaves are down, and mow. At least 85% of the leaves are shredded and stay where they are, putting nutrients *back* into the soil, instead of being taken away when the locality collects the leaves.

https://www.beliefnet.com/inspiration/2004/04/did-god-create-lawns.aspx

428:

You wrote:
The container ships packed full of cruise missiles is likely a fantasy
---
Really? I read, a couple years ago, that hedge funds were *paying* to keep tanker ships full of oil out in the ocean, to keep the price up, and we're talking in the last 10-12 years.

You don't think that one in a hundred ships coming from China might carry such containers, with people on board who would use them?

And in that case... there are no defenses. Fire 'em from just offshore, and they're at their targets in less time than an ICBM.

429:

Trust me, there are huge fights over zoning.

If you haven't read it, may I recommend a book from the sixties, written by the city planner Robert Moses' mortal enemy, Jane Jacobs: The Death and Life of Great American Cities.

Mixed-usage zoning/housing is a Great Thing. Single-use zoning sucks dead Republican roaches.

430:

Yeah, I don't think we know what the football does any more.

The last time it was used (and I think I got this out of Raven Rock), Bush II admitted later that he sat down with the football holder, both of them read through the material, and they realized that they had no option for dealing with a major terrorist attack on New York, aside from scramming to the Nightwatch 747 and going into full-on evasion mode in case more planes were inbound. Cheney was in the White House bunker coordinating the defense while Bush scrambled to safety, per protocol (with the door open, since it turned out the designers had no clue how to set up proper communications into their bunker, so that if the door was closed, the decision maker was cut off and left with inadequate staff).

Anyway, we're now 18 years on from that. It's probable that there's a bigger, better bunker under the White House (they built something down there), and FEMA under Obama rewrote the rules for how a nuclear war would be conducted, based on what Bush and Cheney went through. My understanding under Obama was that the President, VP, and SecDef would, in the event of emergency, man their posts and coordinate the response until they died, at which point the C-Team backups would be notified that they were now in charge. There's a B-Team per the Constitution, but since most of them are in DC most of the time, the guess is that during an actual emergency, they'd be dead too. The C and D teams in the line of Presidential succession are located far away from Washington, and we have no idea who they are. That's part of the problem: if Washington is slagged with all the recognized people in the chain of command dead (President, VP, Speaker, Senate Majority Leader, Secretaries in order), then it's an open question whether anyone will follow whoever's next on the secret list.

So anyway, if I was guessing, I suspect the nuclear football now has protocols for various terrorist attacks, probably a protocol for someone releasing a pandemic at a major airport, and I don't know what else.

Whether the current denizen of the White House has done anything to change the system??? That's one of those questions I really hope I don't find the answer to. The US would become even more dysfunctional under Notional President DT2 or Notional President Giuliani, either option with the Bill of Rights suspended and no functional legislative or judicial branch. I suspect the emergency that brought that about would also include a side dose of civil war, along with breakdowns in public health and in the food distribution network. The death toll would be rather high. Biblical, even.

431:

Yeah, but building the gigantic Bergenholms to allow you to do that is a bear.

Though I will admit it's a hell of a lot easier than going to the parallel universe where c is the *minimum* speed, and doing it on one or two of the planets there.

432:

But you forgot the really hard part. I think it was around '80 that Mother Jones, or someone like that, published the story of the high school guy who, using unclassified plans, had designed a nuke. The one show-stopper was having to get the radioactive slurry in a bucket, then spinning it around in your living room for half an hour....

433:

You missed a major issue: with the nuke war, and some underground cities, suburban sprawl would never have happened, with the result that you wouldn't *have* 76% of the US driving to work alone in a car every day.

https://www.brookings.edu/blog/the-avenue/2017/10/03/americans-commuting-choices-5-major-takeaways-from-2016-census-data/

Therefore, fossil fuel usage would have declined. For that matter, with all the radiation, no one would worry about building more nuke plants faster....

434:

But that does raise an interesting question. For all the fear that Trump might try to "wag the dog" with nukes if he really believed he was going down, WHO would he target?

Russia, perhaps using one of the less-than-major attack options.

US nukes Russia, Russia in response nukes the US. Indirect, but effective. Somewhat like suicide by cop.

435:

Even if the tax income means you effectively have infinite money? :-)

436:

I tried to follow that link. Even after I told NoScript to allow it, all I saw was a blank page, with *nothing*.

CentOS 6, Firefox 60.9.0esr

437:

SR wrote:
This is one of the mistakes that venture capital makes regularly - investors think that our knowledge of the Universe is so complete and mighty we can pour several billions into project, churn some data through some supercomputers and deliver the complete results right away.
---

And if anyone disagrees with this, they're living in a fantasy land. I swear, most upper management thinks that you "make" something by waving your arms, pointing at things with a mouse, and *poof* a miracle occurs.

438:

Single-use zoning sucks dead Republican roaches.

Now you're letting your politics color the facts. D's can be just as stubborn about zoning when it impacts THEIR BACK YARD. Want to come by my neighborhood and discuss the issue? If you're lucky you might get away with just a few feathers stuck to the tar. My area is 3 to 1 D over R.

439:

Whether the current denizen of the White House has done anything to change the system??? That's one of those questions I really hope I don't find the answer to.

I would find it hard to believe he would/could consider or allow others to consider a situation where HE'S NOT IN CHARGE.

440:

it's unclear what, if anything, the US is prepared to do if the current White House Denizen gets nuked.
---
Unfortunately, we live in the near-'burbs, or we'd be throwing the biggest party ever.

441:

Russia

Nope. Ukraine.

Then he could claim forever that they "did it in 2016", HAD the server (now melted) and so on.

442:

I was not talking about Fujitsu, and my points stand.

443:

Yeah, it’s kinda like a ballistic missile sub, only a million times shittier.

Not to mention you have to figure out some way to actually launch them (presumably launching all the missiles simultaneously), and keep them from being detected by the various radiation-sniffing apparatus which doubtless exists, and which is doubtless watching container ships, since everyone is scared the terrorists are going to sneak a nuke in that way.

And if they get detected, they are firstly going to get sunk, putting a fair amount of your nuclear deterrent on the bottom of the ocean, and secondly going to trigger a major diplomatic incident.

And, oh by the way, they are going to be utterly useless if you have a problem with some other regional power, like say the Russians or the Indians or the Taiwanese.

Seems an utterly strange thing to do with the few warheads you have. Not saying it’s totally crazy but if China cared about such things they’d probably just build more boomers

444:

Part of the emergency continuity plans is a fair number of pre-signed executive orders that are sitting in the various emergency command centers (Raven Rock, Mt. Weather, likely some unknown sites).

They also make a pretty strong effort to never have the entire line of succession in DC at the same time.

Still a lot of possibilities for confusion, though.

There is actually a TV show called “Designated Survivor” that explores what happens when DC gets nuked.

445:

it's unclear what, if anything, the US is prepared to do if the current White House Denizen gets nuked.

I don't know what the current arrangements are, but in classical times (pre-1991, roughly) there was a set of pre-delegations in place whereby various CINCs could use their tactical nukes at discretion to respond to attacks in progress. At the strategic level -- and I think this is still true -- the Airborne Emergency Action Officer on Looking Glass could order his on-board battle staff to execute a SIOP attack if the National Command Authority wasn't available anymore.

Those were days when the question of "Who did it" had at most two answers and the AEAO and his folks basically had to figure how much retaliation to inflict on the perp. The range of possibilities is much broader these days and I can well imagine that the Looking Glass crew might be left scratching their heads and wondering what to do and to whom.

446:

Actually, per the book Raven Rock, they're no longer using Raven Rock or Mt. Weather, because a) they're too well known, and b) they're not strong enough to survive a predictable attack. There is evidence of at least one new site, but you're going to have to buy the book to find out where it is. There was also a spate of bunker building under DC in the Bush and Obama years. Who knows how far down that goes?

And yes, I know about Designated Survivor. That's the official plan (I listed it as Plan B in my post above). Raven Rock made it pretty clear that there was a secret plan under Bush II and Obama, simply because most of the highest level officials in those administrations demonstrated that they'd rather die defending the country than do what the protocol demanded, which was to run for secure cover, then wait to assume the job of president should everyone above them die. It turned out that the emergency planners hadn't spent much time talking with the people they were planning for, and didn't take their willingness to sacrifice themselves for the country into account. Much as I dislike the Bush II people, for literally trillions of reasons, I will grant that they were willing to die at their posts.

The Obama-era plan apparently assumed that all the first-line people would die trying to organize the initial response/counterattack. The people who would pick up the pieces afterwards were secretly briefed and identified to FEMA. The idea is that this secret government would take over afterwards. In this regard, the whole paranoia about FEMA instituting an authoritarian "New World Order" is extremely well-founded. My feelings about it are a) FEMA and its predecessors have generally proven to be crap at disaster preparedness, so I'm not sure all this contingency planning will be any more useful than any other bit of 60-year-old toilet paper*, and b) Congress and the courts failed even more abysmally at planning to secure themselves from nuclear war**, so FEMA may be trying to work with the best of a really bad set of options, and trying to secure some sort of executive authority after everything else falls apart.

As for what the current regime would do in an emergency, again, I have no clue, except that I don't think there's a good bunker under Mar A Lago. I have no doubt that there are people in the current administration who are willing to sacrifice their lives for the good of the country. Unfortunately, I have no idea which of the current crew fits that bill, and whether they're in positions to do anything if the proverbial balloon goes up.

*Apparently in the football are some contingency documents hand-written by JFK around 1961. It says something that their planning effort still revolves in part around decades-old secret, handwritten documents.

**Some of the bunker building of the last two decades might conceivably have been an effort towards securing the Legislative and Judicial branches. Previous plans for protecting Congress and the Supremes involved evacuating hundreds of critical personnel by car or bus out of DC with threats inbound and due in minutes. The chances of this happening were somewhere between fat and slim. One congressman at least got four fast motorcycles and made sure everyone in his family could ride. His survival plan amounted to getting home, getting on a bike, and weaving through gridlock to outrun the blast.

447:

If you would tell us the manufacturer of the supercomputer (I recall both Hitachi and NEC made products in that field a while back), and maybe hint at the architecture involved, then maybe we could find out if ITAR applied to their products due to technology transfer. Until then your point (that the Americans forced the Japanese to put arbitrary restrictions on their technology sales) is, well, pointless.

448:

Re: ' ... highest level officials in those administrations'

Okay, so we have a bunch of senior execs surviving.

Do any of them know the nuts & bolts of whatever department they head? (I'm guessing they're probably more knowledgeable about the financials than the actual services.)

Would they even know any of the middle managers apart from occasionally seeing names on some org chart? I'm guessing that 99% of the delivery of federal department services is performed by minions and that it's very unlikely that any of the C-team worked their way up the corp/gov't services ladder.

Or do you want all gov't departments FEMAnized?

449:

No, I was simplifying. The Obama White House reportedly handed out, in addition to their normal ID tags, a set of special tags that went to some of the employees. Those people had roles to play in the post disaster continuity of government. Those without th