

So how vulnerable to disruption is our modern globalized networked economy? Really?

According to IEEE Spectrum, a 70 millisecond power drop in a single factory is going to cause a 7.5% reduction in shipments of FLASH memory over the next two months. Worldwide.

(FLASH memory is rapidly encroaching on hard disks as the mass storage medium of choice for our computers. This affects everything from iPods and cameras to laptops and servers.)

The 0.07 second drop in voltage from the grid coincided with a failure of the uninterruptible power supply feeding Toshiba's Yokkaichi memory-chip plant in Mie prefecture. This in turn wrecked everything in production. Toshiba expect to ship 20% less NAND flash memory over the next two months. And Toshiba are just about the world's largest supplier of what's rapidly becoming a vital component of our key information processing tools. (More here, via the WSJ.)

(I'm now trying to remember the details of the resin factory fire in, I think, Taiwan in the early 1990s, that took out the world's largest source of the high purity resins used to make chip carriers, resulting in a 100% spike in the price of DRAM that lasted for a year or so ...)



That's one way of looking at the question. We can surmise that any industry that relies primarily on just in time inventory is susceptible to this, and the more global the supply chain, the bigger the impact. I guess you could say it's one potential downside to specialization that providers of specialized products or services could theoretically hijack, ransom, or seriously disrupt portions of the global economy (through malfeasance or negligence, not to mention natural disaster).


Yep. It's the problem of the economies of scale driving us towards monocultures - see also crop failures in crop monocultures driving up food prices. Failure isn't priced in (or rather we prefer to pay for it when it happens, rather than in advance).


On the other hand, assuming mediocrity, this sort of stuff must happen all the damn time in other supply chains. Why doesn't infrastructure fail more often?


That sounds pretty insane the way you put it.

Then again, I don't imagine a 7 millisecond power drop is all that rare an occurrence, even on modern power grids, but I'll cheerfully confess to knowing nothing about them, so maybe it is.

I'd be more inclined to blame it on the failure in their 'uninterruptible' power supply, though - I'm guessing if that had been working all would have been fine, but either way I reckon they'll be getting some more backups in for that.

And to be fair, even if it was a full-on 24 hour power outage the net effect would have been pretty similar - everything *in production* was lost.

It does make you realize though how incredibly delicate & fragile semiconductor tech is, and yet we still churn the stuff out by the truckload & sell it for peanuts. Mass production does have something going for it.

Anyway, I am glad I just bought an SSD very recently - now it'll be several months longer before I see the same thing for half the price I paid!


"So how vulnerable to disruption is our modern globalized networked economy? Really?"

According to Nassim Nicholas Taleb, author of "The Black Swan" and the world's ugliest home page, it is not only very vulnerable; our approaches to predicting trends using statistics are utterly broken too.


On a smaller scale the same thing happens in Chemical/Pharmaceutical research and even production. For some chemicals there are only a few sources and if one has a problem availability will suffer.
Furthermore, the production of some basic chemicals is interlinked; lowered demand for a main product will also lower availability of products that are made in the same process which can have completely independent demand cycles.

A recent issue is the production of medical isotopes, for which there are two main production sites (and a couple of small ones). Usually maintenance of these facilities is staggered in order to ensure constant supply (of course you cannot stock a lot of these isotopes due to their half-life). But problems in the Dutch reactor while the Canadian one was due for maintenance threatened supply.


The thaw from the last snowfall was what, Tuesday-Wednesday last week? One of my local supermarkets is still blaming a lack of fresh produce on "the weather". OTOH, it's the Co-op, so the chance of finding good fresh produce seemed limited to start with. ;-)


I think it may also be useful to examine supply chains, especially of critical materials, in terms of their estimated peak productions. That is, for all the things we dig up out of the ground (oil, natural gas, uranium, copper, gold, and all the other goodies that go into our electronics, medicines, and hosts of other things), we can assume them to be mostly non-renewable. Thus increasing demand for these materials causes an eventual peak production, after which prices rise sharply and continue to rise.


The issue with economies of scale is that for most things, demand isn't actually large enough to justify two plants. It's a fallacy taught in economics classes that most things are not natural monopolies; our current ability to design efficient production processes means that for most manufacturing, the plant sizes aren't big enough. There certainly shouldn't be more than one car manufacturer, one chip manufacturer, one airplane manufacturer. And I'm not saying one company, I'm saying one plant (per output: flash and CPUs coming off the same line, no pickups and hybrids either). But there are multiple plants, and in many cases that's just a function of antitrust law, which should not be seen as a bad thing. Frankly, we need more of it, and we need to be gimping our production a bit for it ... or we need billions and billions more people in the demand pool ...

(FLASH memory is rapidly encroaching on hard disks as the mass storage medium of choice for our computers. This affects everything from iPods and cameras to laptops and servers.)

!?!?!?! Surely that can't be right - flash memory degrades over time, yes? Although it would be interesting to see what the specs would be for a computer with no hard drive. Maybe I'm misreading, and this is in terms of gigabytes sold? That sounds a lot more believable - I've got novelty company pens that come with an embedded one-gig flash memory.


(Nit: 70 msec, not 7 msec)

It does happen all the time, and UPSes are a lot less reliable than the 100% people seem to presume.

A large part of the problem is that people do not test their UPS systems, fearing the costs of the possible failures will be blamed on them.

My personal limited experience is that unannounced tests, deliberate or accidental, have 20-40% risk of failure in single-UPS installations, and 5-10% risk if more than 2 UPS are installed at the same facility.
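Those observed rates roughly line up with simple independent-failure arithmetic; a minimal sketch, assuming each UPS fails an unannounced transfer independently with the same probability (which real installations, with shared maintenance and shared firmware, rarely guarantee):

```python
# Rough check on those failure rates: if a single UPS fails an unannounced
# transfer test with probability p, then n independent units all failing at
# once has probability p**n. (Illustrative only: real UPS failures are
# rarely fully independent.)
def all_fail(p: float, n: int) -> float:
    return p ** n

single = 0.30  # midpoint of the quoted 20-40% single-UPS failure risk
print(f"{all_fail(single, 1):.1%}")  # 30.0%
print(f"{all_fail(single, 2):.1%}")  # 9.0%, inside the quoted 5-10% band
```

So the quoted multi-UPS figure is about what you'd expect if the units failed independently; anything worse suggests correlated failure modes.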

I believe similar observations are behind Google's data center design, where each server has a 12V sealed lead-acid battery of its own, and only the environmental and networking hardware runs on UPS power.

In addition to reducing the risk of catastrophic failure, it also allows them to do power work inside the installation while it is running.

Obviously not a feasible architecture for a semiconductor plant.



As others have stated, the problem is not the networked economy, it is the concentration of supply in few or one vendors. It is also the dynamic of "winner takes all", where the "best" solution is used by everyone, driving out alternatives and creating single points of failure.

Modern history is replete with these failures in agriculture, energy production, manufacturing and infrastructure services. What is interesting is that economic orthodoxy and vested interests do not learn from these mistakes, applying band-aids instead.

We've just had a global financial meltdown, yet the response has effectively increased banking concentration in the US which will make us more vulnerable to another financial "glitch" in the future.

Charlie, should you ever decide to write about a star colony, it is likely to be beset with such failures as "best" solutions are likely to be the only solutions in such a situation.


#9: "There certainly shouldn't be more than one car manufacturer" Clearly. ;-) There is no practicable difference between something as hand-made as a Morgan, as made-to-order as a Rolls-Royce, and as totally "white goods for driving" as a Nissan Micra. I've just picked a few examples, but some makes of car really would cease to be what they are if you stamped them all out of a plant making all cars.

Ok, you can make some of the parts in common plants (e.g., Morgan don't have, and never have had, their own engine foundry).


All memory degrades over time. Hard disks with spinning platters eventually break, and there's probably some limit to reads and writes on the platters themselves. The question is not whether it will need replacement, it's how long it can perform reliably enough before requiring replacement.


# 14:

Sigh. Do you actually have something to add to the conversation other than trite and tautological observations?


"There certainly shouldn't be more than [...] one plant. (per output object."

An interesting concept, and one that is predicated on fast, cheap and reliable long distance transport.

It's going to be a bit of a bummer for the newspaper industry when they discover that all their editions have to come off the same presses in Ulan Bator, and they never get them to their customers the following morning.

I can also just see all the world's false teeth being manufactured in Liechtenstein.

(As opposed to just a very large proportion.)


It being a tautology, why did you ask your question in the first place?


@ 17:

What part of "all memory storage schemes eventually degrade with time" don't you understand? And just what do you think it adds to the conversation? Particularly when it's fairly obvious from the question (and real life) that flash memory degrades significantly faster than disk? Please. Grow up.


And by the way, for people like abhelton and bellingham who don't seem to be aware of the phenomenon, here's a link to flash memory degradability


SoV (this means you), please use the "Reply" link when replying to someone else's comments. It makes it easier to see who you're talking to ...


That's about Flash write endurance, not about degradation over time. (Unless I missed it -- but that's the title, and I read the whole thing.)


Ah, sorry Charlie. I had thought that if my replies immediately followed the person I was responding to that this would be a redundancy.

Anyway, what's the scoop on flash memory these days? I know that I've personally already had one go bad on me and this seems to be a fairly common occurrence with people I know for what seem to be fairly basic physical reasons. But then, those drives are from around 2006 or earlier. The last drive I bought (less than a year ago) seems to be doing okay, though it does seem to take a while for the on-board software (cruzer, 8 G, $15) to boot up. I'd erase it, but I'm afraid it's responsible for the memory management.


Uhm, do a search for SSD. I've got one, they work quite well. They have different sets of problems than HD's, but in general are quite a lot faster. They are also, at this time, more expensive.
See the new Mac airbook for an example of a computer with one.


And here's a link to a discussion of replacing hard drives with flash memory.

Note that the first PC to replace its hard drive completely with a solid-state drive was launched in June 2006. It seems that degradation is not considered a crippling problem here, as modern flash memory doesn't actually degrade much faster than a traditional hard disk.

Apart from being arrogant, rude, technically ill-informed and five years behind the times, is there anything else you'd like to bring to the discussion?


The DRAM resin fire was at Sumitomo Chemical, July 4th, 1993. It took out between 50% and 67% (depending on whom you ask) of the world's supply of a resin needed for making DRAM chips.

Memory prices didn't jump, so much as interrupt their normal exponential price drops -- they got stuck between $30/MB and $38/MB from 1992 until the beginning of 1996.


You mean, like snottily commenting that all memory degrades? Instead of commenting on the relative rates of degradation? Gee, that really adds to the conversation. In any event, I've had several people say that they've had problems with their USB drives in the past. And it was this commonality, along with the failure of one of mine, which led me to track down the reasons for this.


By the way, looking at your own link:

There is also some concern that the finite number of P/E cycles of flash memory would render flash memory unable to support an operating system. This seems to be a decreasing issue as warranties on flash-based SSDs are approaching those of current hard drives.[26][27]

You were saying about being rude and antagonistic?


Patrick, thanks for the ref. Prices getting "stuck" is not how I, as a consumer trying to buy 2Mb of RAM to upgrade my 386 at the time, remember it! (Retail prices of around £50/Mb in 1990 zipped up to £100/Mb in 1992; gangs of thieves were targeting IT companies and stealing RAM, not PCs -- easier to carry and fence, much higher value per unit weight.)


I think the DRAM fire is a good example of how in some ways we are not quite as vulnerable as it seems. I read something once about how the ram constraint trickled through to min system specs and software was built to compensate for a couple years.

There is another very good single point of failure example from the auto industry, but I can't remember the details clearly. There was a strike at a part supplier for one of the big 3 US auto makers, I believe in the 90's. The part was small and simple but critical and not immediately replaceable and most of the car company and most of its suppliers ended up being shut down for, I believe, a couple weeks. I think the strikers got whatever it was they were negotiating for. :)


As a point of note, in the past four years I've had four USB sticks die on me, and one 128Gb FLASH SSD (luckily two months inside its warranty cover).

On the other hand, the dying USB sticks were fastened to my keyring and went everywhere in my pocket -- considerable scope for wear'n'tear. On the gripping hand, the last of them to go bad was one of these (albeit 16Gb, not 64Gb), so it probably wasn't killed by me sitting on it ...


SOP for RAM thieves around London circa 1992-94 was:

Procure a couple of under-12-year-old kids (below the age of legal responsibility -- can't be prosecuted as criminals) and arm them with screwdrivers and pre-paid envelopes.

Inject through a window on the ground floor of an office one Sunday evening.

Kids go room-to-room opening up PCs. At each PC, they remove most of the memory, leaving behind 1Mb. (That was enough to boot Windows 3.11.) Insert SIMMs into pre-paid envelopes, and put in postal "out" trays.

At end of the session, kid waltzes out through the front door, past the building security guard, who can't legally touch them. They're not carrying stolen property, so even the police can't lift them; at worst they're trespassing (a civil offense). Possibly they're carrying a football and have a decoy story about how they accidentally kicked it through a window.

The next day, folks get in to work. Their computers are all sluggish. It's probably mid-morning by the time IT realize they've got a problem and a bit later on before anyone thinks to start opening PC cases, by which time the post has gone out ...


@scentofviolets: I'm not sure how my comment morphed into some kind of personal attack. I was merely pointing out, given your apparent incredulity toward SSDs replacing spinning disks, that the idea wasn't so far-fetched. Or how was I intended to interpret "!?!?!?! Surely that can't be right - flash memory degrades over time, yes?"


I have something of a professional interest in seeing how people come up with these numbers. The Wall Street Journal article reports:

Toshiba estimated that its shipments of NAND flash memory could decline by as much as 20% through February as a result of the outage. Based on the company's share of the market, such a reduction would translate into a 7.5% cut in world-wide shipments over that period, but a much smaller percentage for all of 2011, estimated Michael Yang, an analyst at the technology market research firm iSuppli.
What Yang seems to have done was to multiply Toshiba's estimated 20% drop by Toshiba's market share (35.4%). I like iSuppli's teardowns, but that's not worth a paragraph in the Wall Street Journal. That's a grade-school story problem. Going further in the WSJ article:
Timothy Luke, an analyst at Barclays Capital, estimated the outage would reduce 2011 NAND flash chip supplies by 3% to 5%, possibly boosting prices to the benefit of rivals such as Samsung and Micron Technology Inc.
The derivation of this number isn't as transparent, but it suggests that Luke/Barclays thinks a) Samsung, Micron, and Hynix are already operating at capacity, and b) resolving the production cycle problem might take 3% / 7.5% * 12 = five to eight months, but also that c) NAND flash chips have a fairly high elasticity of demand -- there's enough flex in the system that a drop of 3 to 5% in production won't affect price much. (In other words, they're not like wheat or oil.)
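For what it's worth, both analysts' figures can be reproduced from nothing but the numbers quoted in the article; a back-of-envelope sketch (my arithmetic, not theirs):

```python
# iSuppli's number: Toshiba's estimated shipment drop scaled by its NAND
# market share (both figures as reported in the WSJ article).
toshiba_drop = 0.20
market_share = 0.354
worldwide_two_month = toshiba_drop * market_share
print(f"{worldwide_two_month:.1%}")  # 7.1%, i.e. the reported ~7.5% cut

# Barclays' 3-5% annual hit, set against the 7.5% two-month rate,
# implies a disruption lasting roughly this many months:
for annual_hit in (0.03, 0.05):
    print(round(annual_hit / 0.075 * 12, 1))  # 4.8, then 8.0
```

Which is just to underline the point: these headline numbers are one multiplication deep, not the output of any sophisticated supply-chain model.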

At any rate, the San Jose Mercury News reports similar views from the Apple side. It doesn't look like the supply-chain-ocalypse to me. The world economy takes a dozen times bigger a hit every time there's an infrastructure attack in the oil regions of Nigeria (but that's a much less shiny news story).


Starting with the obvious, the primary driving force for SSD, aside from the "green" angle, is speed. The hard drive is an inescapable speed bump in most systems, including fairly inexpensive PCs. Put a [good] SSD in and you get the sort of shot in the arm that makes you smile. If you read that it's less reliable than a HDD (on the statistical average), it's not really an issue, as long as it's *reliable enough*. Looks like a great time for a PSA, eh?! If you have data you do not want to lose, keep two backups of it. Yes, two! The percentage chance of recovering a file skyrockets when going from one to two. Thanks for listening, and have a wonderful day!

Besides - no moving parts! How cool is that?
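The jump from one backup to two is easy to quantify; a minimal sketch, assuming each copy survives independently, with a purely illustrative per-copy survival probability:

```python
# With copies that survive independently with probability s, losing every
# copy has probability (1 - s)**n. The 95% figure is illustrative only.
def loss_prob(survive: float, copies: int) -> float:
    return (1 - survive) ** copies

s = 0.95
print(f"{loss_prob(s, 1):.4%}")  # 5.0000% -- original only
print(f"{loss_prob(s, 2):.4%}")  # 0.2500% -- one backup
print(f"{loss_prob(s, 3):.4%}")  # 0.0125% -- two backups
```

The caveat, of course, is independence: two backups in the same house don't help much against a fire.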


Bellingham@16: It's going to be a bit of a bummer for the newspaper industry when they discover that all their editions have to come off the same presses in Ulan Bator, and they never get them to their customers the following morning.

Didn't Russia have a problem like this when the USSR broke up? If I recall correctly the Plan had decreed that there would be only one toilet paper factory for the whole Soviet Union. Unfortunately for caught short Russians, it was in Latvia...


Really depends a lot on how you define "vulnerable". No one is gonna die if flash memory supply gets disrupted.

I, in my entire life, have never witnessed a supply shortage of anything needed to sustain human life. I did have to wait quite a while to buy an ipad though.

Also, take for example New Orleans as a case study, an entire city was cut out of the grid for an extended timeframe, experienced a major hurricane and then a lake was dumped on it and basically no one died.

I am coming to the conclusion that providing the basic necessities in the developed world has become a complete no-brainer. Nowadays it's all about the icing on the cake, not the cake.


The people saying "the problem isn't the network, the problem is concentration of supply" are failing network theory big time.

It's the nature of scale-free networks to have a few highly interconnected nodes. Most failures are more-or-less invisible, but when one of the important nodes fails, you get problems like this one.


And did you buy high-end USB thumb drives, or lower-end models?

(I honestly have no idea what the high-end thumb drives are these days. I did have it explained to me why they won't be as fast as SSDs [in addition to higher-quality, and faster, parts, SSDs also use more parts, so they can use parallel I/O instead of sequential I/O], but that's about it for my knowledge. Oh, and SDXC has the potential to be fast and large, but still limited to USB2 speeds.)


I don't recall the price hike in the early 90s, but I do recall something similar around '98-'99. There was some talk of a fire/earthquake etc.; prices went from dropping to static and then actually began to rise.
After talking to a few disties and supply chain people, I got the overall impression of a slow burn of 'it's apparently scarce, let's add 5 percent to our own markup, blame a catastrophe in FarAway, and clear our extant aging stock to boot' around the industry. Almost like they thought, 'the last time this happened, we missed a trick; time to capitalise.'
I wouldn't mind speculating that some of that mental process drove the release of this story to the press: 'Let's add some perceived scarcity, since the markets like that kind of scare story to stimulate trade.' Or am I just too cynical?


basically no-one died ...

For values of "no-one" that approximate to 4081 dead (give or take: it's an estimate).

Paging Dr Pangloss, Dr Pangloss to the white courtesy phone ...


Sean: I was using high-end thumb drives. They just don't seem to be very robust, in my experience.



No-one at all in this discussion, including our gracious host, seems to realise how appallingly vulnerable we are.

Ready to be scared?

All it will take is a repeat of the Carrington Event of 1859, or even one of the lesser storms of 1921(?) or 1960.
For information start: here or here
Telegraph wires, then, melted.
It isn't just that we will probably lose EVERY satellite we have; power-distribution lines and services will crash, and computers connected to the mains may well burn out ...

What are we doing about it, and preparing?
As far as I can tell – nothing at all.


There are many different kinds of flash memory: NOR, single-level-cell NAND, and multi-level-cell NAND, optionally made at different lithographic process nodes, with different storage controllers, and with different error-correction schemes. Camera memory cards and thumb drives are usually made with the cheapest-per-bit MLC NAND and no heroic efforts for error correction or wear leveling, so they fail fastest. "Enterprise" SSDs are usually made with SLC NAND and have better controllers and error correction, for much higher reliability. But even within a single class products aren't fully commoditized; there can be major differences in endurance depending on how the details are implemented.

Google's 2007 study of hard drive failures in their data centers revealed interesting behavior that did not show up in previous small-scale studies, manufacturer guarantees, or theory. I hope that a similar study will eventually appear for SSDs so that we can rely on data instead of theory and speculation (at least until the next implementation change renders the conclusions invalid).


Greg, actually the Carrington Event is on the radar. New Scientist have been wibbling about it for a year or two now, and they didn't start doing so just because the historians started scratching their heads.

The question of whether we're prepared for it is another matter, of course. But just think of all the opportunities for reconstruction contracts! Why, it's just like disaster capitalism, without the need to bomb anybody to get the process started!


It is not enough to look at the flash technology; you also have to look at the controller. In fact the controller is much more important most of the time.

Roughly speaking, there are two grades of controllers: "camera grade" and "real".

Camera grade controllers are designed to write huge files to pre-erased space in a FAT filesystem, and will suck at anything else. For "suck" read: 10-20 second access times if multiple segments are open for write at the same time.

"Real" controllers typically have an ARM7 with 32MB of RAM, or similar processor power.

In general you can trust anything with a SATA interface to be a "real" flash controller, with the notable exception of Transcend, who happily put a SATA->PATA bridge in front of a CF-camera controller and claim that the result works as well in a computer as in a camera, proving once again that if it is too good to be true, it probably isn't true.

Anything else, be it CF, USB, SD or whatever, is "camera grade" and should only be used for large sequential writes, or read-only access.

If you want to know the real story: Find M-Systems patents, now owned by Sandisk.



I agree the numbers posted here and elsewhere in the media are ... interesting.

Assuming the plant was operating at capacity, a 20% drop in production over 2 months works out to the plant being idle for approximately 12 days. Clearly something more than a single batch of product was affected by this sub-second power interruption. Perhaps the calibration of a critical component was thrown off and downtime was needed to ensure everything was working correctly. More likely the plant was idled to do a major overhaul of the electrical system, to make sure this sort of thing never happens again.
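That 12-day figure is just the shipment loss expressed as equivalent downtime; a trivial check, assuming constant-rate production over the roughly two-month period:

```python
# Sanity check: at constant capacity, a 20% production loss spread over
# roughly two months (~60 days) equals about 12 days of lost output.
days_in_period = 60
loss_fraction = 0.20
print(round(days_in_period * loss_fraction))  # 12 days of equivalent downtime
```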


I asked a friend who works for ST Microelectronics why an interruption so small can cause so much damage.

The answer is quite long and full of technicalities; the bottom line is that you lose control of the environment and the production cycle.

All chip factories are directly connected to power plants. SMALL chip factories require 12 MW of uninterrupted power. With such levels of power you cannot just use a big version of the UPS that you have under your desk; you have to rely on the efficiency of the power supplier.


Actually there are two failure modes associated with "just-in-time" supply chains that can cause serious shortages, and we've only talked about one of them so far: a loss of manufacturing (for values of manufacturing that include crop growth and harvesting and other product creation techniques) capability. But even if the loss of manufacturing is relatively short-term, and doesn't cause long-term shortages there can be catastrophic shortages further downstream if there is no inventory storage in the pipeline to smooth over the loss of production, and the product has a short shelf-life, or is an ingredient in other products that do. And just-in-time supply chains are designed with no inventory as a feature.
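That downstream-propagation effect is easy to demonstrate; a toy sketch (my own construction, not anything from the thread) of a three-stage chain where the first stage halts for a few days, run with and without buffer stock between stages:

```python
# A three-stage supply chain: stage 0 produces, stage 1 processes, stage 2
# ships. Stage 0 is down for the first `outage_days`. With zero buffer
# stock the stoppage reaches the end of the chain immediately; with small
# buffers, downstream stages keep shipping while the buffers drain.
def simulate(buffer_start: int, outage_days: int, total_days: int = 10) -> int:
    """Return the number of days the final stage actually shipped."""
    buffers = [buffer_start, buffer_start]  # stock held between the stages
    shipped_days = 0
    for day in range(total_days):
        if day >= outage_days:     # stage 0 back up after the outage
            buffers[0] += 1
        if buffers[0] > 0:         # stage 1 consumes and passes downstream
            buffers[0] -= 1
            buffers[1] += 1
        if buffers[1] > 0:         # stage 2 consumes and ships
            buffers[1] -= 1
            shipped_days += 1
    return shipped_days

print(simulate(buffer_start=0, outage_days=3))  # 7: pure JIT loses 3 days
print(simulate(buffer_start=3, outage_days=3))  # 10: buffers absorb the outage
```

The buffered chain pays for its resilience in inventory carrying cost, which is exactly the cost just-in-time is designed to eliminate.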


The first thumb drive I bought was a cheap 4Gb that I put on my keychain and filled up with the source and libraries I was using at work, so I could update the mirror I kept at home and work there (getting into the company network from outside was problematic, as the sysadmins kept blocking the incoming ports we'd set up when they refused to install a VPN). Unfortunately I had a tendency to leave my keys in my pants when I tossed them in the clothes hamper, and the thumb drive went through the wash several times. But it continued to work the first 3 times; the 4th time I think it finally succumbed to vibration and one of the chip bonds failed.

On the other hand, I have a number of SanDisk CF cards that I use in my DSLR camera; they've been functioning for 5 years now with no need to replace them. Of course I don't see a need to wash them. :-)


> For values of "no-one" that approximate to 4081 dead (give or take: it's an estimate).

When it comes to the number of deaths that usually result from famine or total infrastructure collapse? Yeah, it was effectively nothing. China's Great Leap Forward is an example of a self-inflicted loss of critical tools, and the result was 20 million (or more) dead. Easter Islanders suffered the permanent loss of a necessary component of their civilization, and the result was famine and cannibalism before finally stabilizing at 10% of its prior population. Quite a few others, the Maya and Anasazi come to mind, disappeared entirely.

Greenland around 1700 AD is noteworthy because it played host to two separate groups of humans with vastly different economies when the climate cooled. The Inuit were more or less hand-to-mouth and survived, while the Norse were heavily networked with continental Europe and failed when the shipping lanes closed.


As someone who worked in a chip fab (albeit a tiny and bizarrely specialized one) let me tell you about the failure modes. Nearly everything that happens to silicon wafer during processing happens either in a vacuum or at elevated temperature in an exotic doped atmosphere. These conditions are maintained by very large and expensive pumps and furnaces. When the controllers for these items encounter conditions they don't expect, they fail-safe by turning everything off. This is expensive in terms of the amount of product that must be tossed out, contaminated deposition chambers that must be hand-cleaned, etc. It's a lot cheaper than (a) melting down your $5,000,000 plasma-enhanced chemical vapor deposition system, (b) burning down your gigadollar fab, or (c) killing your entire staff with arsine gas.


When it comes to the number of deaths that usually result from famine or total infrastructure collapse? Yeah, it was effectively nothing.

You're moving the goal posts.

I've been in a city that took a direct hit from a typhoon while I was there -- Tokyo. An estimated 17 people died (most of them homeless) out of 30 million. Note that, like New Orleans, Tokyo is coastal and exposed -- it's not below sea level but it's barely single-digit metres above it.

New Orleans took a somewhat stronger hurricane hit, but lost over two orders of magnitude more dead. Something is very wrong there. Let's move the focus again and look at Bangladesh, and what happens when they get hit by a bad typhoon (for example, April 1991): it makes New Orleans look like, well, Tokyo.

But you know what? The developed world doesn't generally take casualties the way New Orleans did in 2005. Something went badly wrong there, and for you to approximate it to "zero" is breathtakingly cynical at best. (At worst, it bespeaks a certain attitude that I'm loath to accuse someone of without additional supporting evidence.)


Barry Lynn wrote a whole book about this back in 2005--End of the Line. It's not just chips--it's everything.


Who will put it back together if everybody's bank accounts are electronic only?


The estimate of 5k was not "give or take"; it was probably much closer to a ceiling, and with pretty questionable methodology applied.

But yes Charlie, something did go really wrong in New Orleans; for whatever reason the response to the crisis was completely derailed for several days. It's almost a worst-case scenario, including the response portion.

The hypothesis though is that in the developed world at least, single isolated incidents don't really seem to do much in the grand scheme of megadeaths anymore, even when they are badly mishandled. The developed world has a lot of built-in redundancy in the necessities of life that protects from blips in the supply chain or localized disasters.

I imagine the reason why just-in-time manufacturing doesn't bite us in the ass more often is that even if the supply chain gets a bit wobbly and you cannot buy your Nintendo for a week no one really cares.

To really move the needle you need a disaster that is not localized, either something that is massive in and of itself or something smaller that triggers some kind of systemic failure across inter-related systems.

I think what you are trying to get to is something more of the systemic failure variety. To become plausible it needs to link to something that actually kills people: food, water, electricity, heat, etc. That is hard because the necessities are pretty well padded; they are not just-in-time, they are heavily redundant. The failure needs to be either extremely severe or very long lasting. Say, global financial crisis, anyone?


I have two reactions when I hear someone mention ~4000 dead people as effectively zero. First, the number probably looks a lot larger if you are one of the 4000 or a close relative. Second, possibly to some people 3000 or 4000 middle and upper middle class white people killed when a large airplane hits their place of work looks a lot larger than 4000 poor black people killed due to governmental incompetence and possibly malice.


Oh geez, don't get hung up on the verbiage.

Fine, I will rephrase.

A worst-case scenario hit a major city, knocking it off the grid for an extended period of time, and 99.8% of the population survived the immediate results. This argues against supply chain disruptions killing people in the developed world, unless they are systemic or prolonged.


Too big to fail is only more obvious in the financial sector. It occurs in many other areas too. The risk cost of individual players becoming so big that their failure creates far-reaching negative impacts is not priced into any decision making anywhere. Perhaps this requires some other mechanism to handle.
Also, one of the disadvantages of competition is that it is not in the interest of any of the competitors to maintain spare capacity. It is in the interest of users that there be spare capacity.


Like I said on another thread, it ought to be possible to trace ripples through the supply chain from central Scotland's snowpocalypse last week, when we had 5, maybe 10, inches of snow in the morning during rush hour, and nobody was sufficiently prepared for it.

It seems anyway to have caused the temporary suspension of internet shopping by famous names such as the big supermarkets, because of the backlog.
Now that isn't a massively big issue, but when you are in your local supermarket and there's no fresh food left, then there's a bit of a problem. We can get by for one or two days, but if anything that size hits a wider area we'd have real issues.

On power supplies: the place I used to work had a number of backup generators to power cooling water pumps if the mains went off. The mains power failed for various reasons which I won't go into, but we were drawing several megawatts on a normal day's business. If the power went off, you might have a 2,000C furnace with no cooling water = melted furnace = big fire and lots of unpleasantness, i.e. hundreds of thousands of pounds of damage.
Unfortunately, due to management incompetence over the change in maintenance manager, the bringing in of lots of pals of the new manager, and the lack of organised paperwork from the old maintenance staff, nobody had done anything with the generators for a few months...

That was the first time I'd actually seen an electrician run across the factory, as the generators didn't come on automatically when the power failed. Two others and I decided we'd be safer at the new site, so we went and inspected progress there, having no emergency-critical tasks to do; and when there's no power, your computer doesn't work anyway.
It turned out lack of preventative maintenance meant the generators were low in diesel and water and thus didn't start.
Having an appreciable percentage of the world market for carbon fibre furnace insulation meant that, despite a trail of disasters during my time there that left us permanently 2 or 3 months behind schedule, the company never went out of business. See the wonders of the market, as inefficient companies still survive...


You must compare apples to apples. Citing events more than 100 years ago, or those that did not occur in the highly developed world, isn't a fair comparison (the U.S. is certainly in the top 5 developed countries, and N.O. is a major city, not low on the list within the U.S.).

And so, comparing apples to apples, I can't think of too many events on the magnitude of 4,000 deaths that are not considered a big fucking deal (f*** for emphasis, not ire or malice) and a whole hell of a lot of death. Prior to NO's Katrina, the last U.S. disaster I can remember on that scale was 9/11, and that certainly seemed like a lot of dead too.

Sure, it wasn't as bad as the 2004 Indian Ocean tsunami or this year's earthquake in Haiti, but again -- that's not apples to apples at all.


Oooh, the bomb manufacturers won't like that.


To be completely fair1 in comparisons, I'd argue that Louisiana in general, and New Orleans when Katrina hit in particular, were not really first-class parts of a First World country, but more like a backwater province kept in conditions not a whole lot better than some developing nations. And the US media reported the conditions in New Orleans with that sort of slant.

1. Even if I don't want to be fair to the weasels who caused most of the deaths and damage in New Orleans.


This is why we need to challenge the story of the global capitalists, who have sold us on the notion that bringing the entire world under their banner is the only route to material utopia. I get the impression that Mr. Stross is rather sympathetic to the global socialist alternative, but I would like to suggest that the most radical and timely critique of the modern world comes from the so-called "far right". Only the far rightists seem to understand that true diversity and security comes from having distinct cultures and civilizations, not from imposing a homogeneous global civilization that, if its foundations continue to prove faulty, will culminate in total global collapse.


The consensus (or perhaps the loudest) view is that there have been many instances of 'point failure' in our globalised supply-chain economy, and that all of them have been limited in their effects, so that we can safely conclude that there is not and can never be a 'point failure' that generates a cascade of failures in dependent systems that accelerates and expands to include vital infrastructure or non-restartable processes essential to the economy.

Me, I'm a pessimist. I know that we haven't found a 'schwerpunkt', a single point-of-failure that initiates a cascade that can halt a modern economy. I am gratified that we can demonstrate such resilience, but I don't conclude that we have proven that no such vulnerability exists.

It seems that we are looking very hard for one.

Perhaps the nihilists among us should look at the problem from another angle: identify non-restartable processes and ask "what would it take to stop them?" Given the resilience that nation-states have demonstrated against precision-bombing, even taking down the power grid and all the bridges isn't going to do it - and that's well beyond a single-point-of-failure. Apocalyptic, even, and close to solar-storm or exoatmospheric EMP effects.

A single shocking event that leads to an irreversible breakdown of popular trust in democracy might count ...If any mass media exist that would report it. But I suspect that tolerance of tyranny is a more likely outcome than revolutionary chaos.


Looking back to the stuff around @44.

I've talked about disaster stuff with a couple friends/family before. And if something like the Carrington Event happened on the scale of what @42 is talking about, we could be well and truly screwed, for keeps.

If something hit the earth as a whole hard enough to knock us back on our heels with power generation, and whatnot, it's entirely possible that we might not be able to recover, if the hit is good enough. Not unable to recover in a few weeks/months/years, but EVER.

The problem is, that all the easy to access, high density fuel sources are tapped out. Sure there's LOTS of oil, shale, gas, etc. But it's trapped deep. And takes a certain level of tech to access and develop. ~100 years ago, oil was bubbling out of the ground in PA and we could bootstrap off that to build the modern world.

If all we've got left is wood, we would likely never be able to get back to where we are. It's just not a dense enough source of energy to power all the really cool things that underpin modern life.


I've got to agree that pessimism may be warranted. Most of our systems have turned out to have failure statistics that obey power laws rather than Gaussian distributions, so our assertions that because we haven't seen massive single-point failures it must mean there aren't any are not credible. Certainly the economic meltdown in 2008 proved exactly that.


About 5 years ago Vernor Vinge was definitely thinking about whether CPUs could be a single point of failure.

Back in the 1960's the BBC's "Out of the Unknown" SF TV series had a story about a culture that had lost its technology due to metal-eating bacteria (no idea which short story it was based on). In a case of life imitating art, we now learn that metal-eating bugs have been found on the Titanic...


I remember the RAM price spike quite well. In 1991 I had a chance to buy a job lot of 1Mb & 4Mb DIMMs for a ridiculously low price and took it. When I moved to Canada in 1992 the resale of those DIMMs supplemented my start-up income to an extent that made the investment very worthwhile. As did the 386DX-25 motherboard that I sold for $1000 used. Don't complain about modern hardware prices too much...


Re: @11 UPS Testing Usually Fails

In 1994, the company I worked for tested the vendor-installed 30kV UPS and found that it pushed 30kV (not mains-spec 120V) to the mainframe before the breakers popped, leaving the team in the dark, silent, data-scented smoke.

Since then I've tested every UPS installation I come across. I think that things have gotten a lot simpler, as most personal computing moved to mobile devices (sort of an implicit UPS, with their batteries and power management). But my brief encounter with larger-scale UPS systems convinced me that it's hard to get those things right. They usually fail.

(I recall the RAM-ocalypse of 1993+ all too well, the summer I built my first PC. Ouch.)

(Good comment @45 re: flash controllers. Who is this Poul-Henning Kamp anyway, and why does he know so much about computers?)


I think that our more or less global civilisation is getting more and more vulnerable to disruptions but I don't think that this fragility has much to do with economic globalisation and networked economies.
I think that the two culprits are, on one hand, managerial and political religions which place economic "efficiency" above everything else (in public and private sectors), and on the other the spread of the jet-set to all the developed and developing countries of the world.
Because of the absurd cuts made everywhere in the name of right-wing economic theories, both companies and governments don't have the "slack" needed to handle emergencies of all kinds. This extreme leanness applies to human resources as well as physical installations. There's no redundancy left where the neo-Thatcherites have come and gone.
At the same time the simple spread of affordable flights to a vast proportion of the humans on all continents means that diseases spread like never before, among humans but also among the bio-products they bring along, wilfully or unwillingly. The old epidemiologists, the seasoned nurses, the experienced vets and botanists who would have shared their knowledge with the incoming replacements have been let go too early, put into retirement or fired. Sometimes the replacements haven't even been hired.
So, the fragility is caused by the cheap airline ticket and the 21st century efficiency obsessives/neoliberalists.


I'm not sure how that follows. A systemic failure does not imply a single point of failure. Financial contagion certainly isn't caused by a single point of failure -- and in fact, modeling risk as if it were governed by single points of failure was responsible for crazy fun things like the Long-Term Capital Management debacle.

A mass of slightly sub-optimal decisions can create decidedly non-optimal outcomes. What's the single point of failure for a recession? What's the single point of failure for the whooping cough epidemic?

There are intermediate cases: effective prevention of a bad outcome is under the control of a comparatively small group of decision-makers who misunderstand a situation. That's a perennial.


Smile when you say that, damnyankee. Better yet, go to a bar well away from Bourbon Street and try saying it. I wonder if you'd make it to the door.

I had friends there and on the coast of Mississippi. They survived, but they're still recovering and rebuilding. So, yeah, this still is a bit raw for those of us that live in the region.


Just a personal pedantic nit, since I actually do research in digital storage systems:

Flash is common and useful in consumer devices like iPods (low-power, small); it's much less common in server environments and larger-scale scenarios for a variety of reasons, and actually hasn't made many inroads at all.

The big ones are

1) Wear: multi-level cells wear out after 10k or so writes to a cell (hence wear leveling); not a big deal for the personal user, but it sucks for larger groups.

2) Cost per bit: it's still too damn expensive to come close to replacing primary storage at multi-terabyte to petabyte scale. It does have a place as a high-speed cache, but its inroads into primary storage at big scale are probably gonna fizzle out.

3) Physical limits: Flash is reaching fundamental physical limits; we're only going to see density double 1-2 more times, and that's it for that particular technology. There are some interesting new technologies coming up that might take its place, however, such as phase-change memory.
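The wear problem in (1) is usually mitigated in the controller firmware by wear leveling. A minimal, purely illustrative sketch of the idea (nothing like a real flash translation layer; the block counts and limits here are made up):

```python
class ToyFlash:
    """Toy wear leveling: redirect writes to the least-worn physical block."""

    def __init__(self, physical_blocks=8):
        self.wear = [0] * physical_blocks    # write counts per physical block
        self.data = [None] * physical_blocks
        self.mapping = {}                    # logical block -> physical block

    def write(self, logical, value):
        # Release the previously used physical block, if any.
        if logical in self.mapping:
            self.data[self.mapping[logical]] = None
        # Redirect the write to the least-worn free physical block.
        free = [p for p, v in enumerate(self.data) if v is None]
        target = min(free, key=lambda p: self.wear[p])
        self.wear[target] += 1
        self.data[target] = value
        self.mapping[logical] = target

    def read(self, logical):
        return self.data[self.mapping[logical]]

flash = ToyFlash()
for i in range(100):        # hammer a single logical block 100 times
    flash.write(0, i)
# Instead of one cell absorbing all 100 writes, the wear is spread evenly:
# every physical block ends up with 12 or 13 writes.
```

The same indirection also lets the controller retire a block that has gone bad, which is why hammering one "file" on a decent drive doesn't burn one spot of the chip.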


The subject is resilience, and it's still at a high level of interest a decade after 9/11.

Basically what people found was that the systems were so interconnected that if you knew where to push them you could have really big effects for very little effort. The nightmare scenario is to actually get an intelligent terrorist, skilled in pushing system level buttons. Then you'd be in for a world of pain you could do little to prevent.

Not only do we not have resilient systems (and as long as accountants have a say, we never will); often we have lost the people capable of creating what is now running. Remember Y2K and Cobol? There are whole areas of technology where that applies.

At a rough rule of thumb, you should never have more than 10% of your critical infrastructure / resources dependent on the same value chains - just to take into account the unseen connections. And if the accountant whines you're better off shooting them - less damaging in the long term.


Charlie @ 44
I don't read New Scientist any more.
Not after their ghastly take on Darwin over his 200th birthday, and the glee with which the Cretinists seized on it!

& @52
"Something went badly wrong there" (New Orleans)
Yes, it's called the Republican party - AND the US attitude that communal protection is somehow, you know, socialist

@ 55
Yes a BIG solar Storm, like I said back at 42
( Which is known to be THE ANSWER, no? )

@ 64
Even a 1960-grade solar storm would cause major disruption, for a long time – say 60% of all satellites go down?
Never mind another 1859.

@ 66
It is like the astrobleme problem…
Up the size by an order of magnitude, downgrade probability by an order of magnitude, BUT risk of catastrophic failure/damage also goes up by (about) the same amount.
Now, are we prepared to pay the “insurance premiums”?
I think that is the question we are trying to answer.
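That order-of-magnitude argument is just expected-value arithmetic; a toy calculation (with hypothetical damage and probability figures, purely to show the scaling):

```python
# Each event class is 10x bigger but 10x rarer than the last.
# Expected annual loss per class = damage * probability, so every
# class contributes the same expected loss -- the rare huge event
# matters exactly as much as the common small one.
classes = [(10.0**k, 10.0**-k) for k in range(1, 6)]  # (damage, annual probability)
expected_loss = [damage * prob for damage, prob in classes]
print(expected_loss)  # [1.0, 1.0, 1.0, 1.0, 1.0]
```

Which is why "it hasn't happened yet" is a poor argument for not paying the premium: the tail classes carry as much expected cost as the ones we see every year.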

@ 70
Arthur Wellesley, Duke of Wellington, spotted this one long ago:
Commenting on military campaigning, he said of Boney’s French – “Their system is like a piece of magnificent horse-harness, beautifully fitted, and does its work very well – until something breaks. And then you are done for. Now, I made my campaigns of knotted rope, and if something broke, I tied a knot, and carried on.”

@ 74
True and scary.
If terrorists were "intelligent" as you suggest, they could make the 7/7 attacks in London, deeply unpleasant as they were (My wife was quite close enough to the Aldgate/Liverpool St bomb, thank-you) seem like pin-pricks.
All it would take is a little study of local history, and a careful look at a map or google/multimap "street view" to cause disruption that would shut our capital down for a week.
Is anyone doing anything about it?
Not as far as I know - and I have warned the "authorities".


The heavy snow in the UK was a demonstration of what can happen. Charlie and I were both in areas which were hit pretty hard. Not just the road condition, but it was cold enough for fuel filters in diesel engines to clog with wax.

I haven't heard any reports of food shortages, but I'm an old-fashioned rural sort of guy, and there's a lot of food in a big freezer. It still dragged down my stocks.

My personal opinion is that it didn't last long enough to make a big difference. The weekly big-shop pattern gives some resilience. But an event on the scale of 1963 or 1947 would really mess things up.


I've had three thumbdrives fail, not through obvious rough use. In one case, I was able to get at the data after re-soldering the connection from the USB connector to the PCB. I suspect that may be a weak point in the design, generally. I'm not an expert in mass-production tech, SMDs, wave soldering, and such, but that connector assembly has a hugely greater thermal mass than the chips.

There have been a few thumb drives which have an ultra-small form factor, using a single PCB to mount the chips and the USB contacts. PQI used to make them, but they seem to have vanished. Just an example in this huge image file. I wouldn't be surprised if it turned out to be no more robust, overall.


One example of the unanticipated costs of efficiency.

The British pottery industry used to ship a sizable fraction of its output by canal. The barges would take several days to deliver to London, or one of the other major cities on the canal network.

Oh dear, said the new-style managers. It will be so much quicker to send our goods by the new motorways. Next day delivery, so much more efficient.

They had more breakages in transit, and needed to buy warehouse space to replace the barge space they had been using to store a week's production. Pottery-making is essentially a batch process, however much you might do to the processes before you fire the kiln. (And Josiah Wedgwood, for one, was pioneering mass-production techniques.)


#Various ref memory degradation:-

Which value of "memory degrades over time" are we using?

Is it:-
1) "stable media have a limited number of read-write cycles"
2) "magnetic and flash memory become less easy to read over long periods in unused storage, so need periodic re-writing"
3) "the real vulnerability with USB flash sticks isn't the flash RAM itself, but the point where the USB connector is attached to the "system board" in the stick, which can easily be broken by even slightly clumsy handling"
4) "something else"?

#Various ref the New Orleans death toll - How many of the deaths were attributable to trauma and/or entrapment from the hurricane and/or flooding (and delays in civil engineering rescue) [which I'd ascribe to poor civil defence reaction], and how many to disease, starvation, thirst, disruption to medical services [which I'd ascribe to supply chain disruptions]? I'm not trolling, unless you feel like arguing that being hit on the head by a 5 storey building is really the same as dying from a lack of power for a dialysis machine.


#77 para 1 - Yeah Dave, that's very much my point in #79 Part1. We used to have one member of staff who broke a USB flash stick every few months, normally when getting them in and out of ports, hence bullet (3). That's 2 of us who have that experience as the main failure mode of USB flash so far.

I can't think of too many events on a magnitude of [4000] deaths that is not considered a big fucking deal (f*** for emphasis, not ire or malice) and a whole hell of a lot of death. Prior to NO's Katrina, the last U.S. disaster I can remember on that scale was 9/11, and that certainly seemed like a lot of dead too.

A typical month's road traffic accidents?


History will remember Poul-Henning Kamp as the leader of the MD5, the seminal proto-punk group that introduced smoking cryptographic hash and debuted their immortal hit "Kick Out the Jams" at Woodstock Europa.


"In a case of life imitating art, we now learn that metal eating bugs have been found on the Titanic..."

There are also known to be glass-eating bugs on the ocean floor. (Some geologist looked at the rate that volcanoes were making glass down there and wondered why the entire world wasn't made of it...)

They'd probably make a mess of human society if they escaped up here as well. Generally we're reasonably safe from them because they're specialised to live in that environment and would fare poorly against pretty common bugs up here.


"The old epidemiologists, the seasoned nurses, the experienced vets and botanists who would have shared their knowledge with the incoming replacements have been let off too early, put into retirement or fired. Sometimes the replacements haven't even been hired."

My Dad finally retired in one of the last groups of engineers to leave that particular part of the company, without training replacements despite their suggesting it. He said "it's a good job the world doesn't need another generation of big jet engines after the current lot, 'cos they won't get one."


I seem to recall that the last flu epidemic strained the NHS to breaking point. A focus on cutting beds to save money when you have an ageing population is insane, but Labour and now the Tories want to privatise it anyway. Once it is safely privatised we'll get the same service at twice the cost and guaranteed less resilience.


Indeed. It would appear to be (3), something which doesn't apply to SSDs.

Failing 'cells' is an already-known problem - it is not new to SSDs. It's a problem that hard drive manufacturers have already had to address. In their case, it's a sector going bad rather than a flash cell going bad, but the principle for addressing it is still the same - you use Hamming codes and the like (go Google Galois fields and RAID for one particular approach - note that Galois field encoding is also used in QR Codes and Datamatrix symbols, in the guise of Reed-Solomon encoding).

When it comes to anything containing of the order of 10^12 bits of data, you must assume that some will be unreliable. Disc sectors are not perfect - never have been, never will be, except possibly temporarily when manufacturers seemed to be attempting to charge nice premiums for zero-defect megabyte-order drives.

So you store data in such a way that loss of any particular bit can be detected and repaired. And you keep a bit of spare capacity for when you decide a sector/cell is irreparably bad and has to be replaced.

For the case of write cycles, you make sure that you're not always hitting the same cells. When it comes to a terabit of data, most of that is effectively unchanging. If some part is getting hammered, swap it with something that's not.

(I managed to break my RAID 5 array the other week - I forgot to properly reattach the cable for the third drive after installing a new graphics card. The volume rebuild took the best part of a week, and that's a 4 TB array.)

The above requires intelligence. That's why the drive controllers tend to contain quite powerful processors.
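The repair idea described above can be shown with its simplest concrete case: RAID-5-style XOR parity. This is a toy sketch with made-up block contents; real controllers use the stronger Reed-Solomon/Galois-field codes mentioned, but the principle is the same:

```python
from functools import reduce

def parity(blocks):
    # XOR equal-length blocks together byte by byte to get the parity block.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"spinning", b"platters", b"breaking"]   # three equal-size data blocks
p = parity(data)                                 # stored alongside the data

# Lose any one data block; XOR of the survivors plus parity recovers it,
# because each byte XORs out: d0 ^ d2 ^ (d0 ^ d1 ^ d2) == d1.
recovered = parity([data[0], data[2], p])
print(recovered == data[1])  # True
```

Detecting *which* block is bad (as opposed to knowing a drive is missing) needs checksums on top of this, which is part of why the drive controllers carry those powerful processors.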


"Twinkie, Deconstructed", by Ettlinger. He was surprised by how few plants made modern food additives like lecithin, etc.


If we lose the power grid long enough in July in the US South, most of the wrinklies will die. So will some smaller children.
Europe lost 10K plus, probably more like 50K plus, in a heat wave and that was with a functioning power grid.


Re: 75:

& @52
"Something went badly wrong there" (New Orleans)

Yes, it's called the Republican party - AND the US attitude that communal protection is somehow, you know, socialist

NO was terrible. And many things went terribly wrong. But blaming the R's for everything is misplaced. They were just the last ones standing in the game of musical chairs.

But yes the R's did fail. Bush in particular. He should have gone public and said he was going to break the law and send in federal troops with supplies and dared Congress to impeach him. But instead he followed the law and stayed out until the LA Gov asked him to come in. She wanted folks to leave and not be supported in place and stuck to that plan for 2 or 3 days. And by law the feds could not go in until she changed her mind.

As to the history: as someone who grew up next to the Mississippi and Ohio rivers, plus some others that were big but not so big relative to those, I'd say the management of the rivers in the US has nothing to do with engineering, except that the engineers take the directions and money supplied by the politicians and do what they can as best they can. (Engineers say they need $10 to fix issues. Pols give them $5 and say it's enough to fix it and shut up.) But as long as the US is ignoring the facts of how rivers evolve, NO will always be a disaster waiting to happen.

There are better articles but I can't find them just now.

Maintaining a city below sea level 100 miles from the sea when the city is sinking and the river filling up with sediment from 1/3 of North America by building higher walls along the river and bigger and bigger pumps is a long term fail.


SoV: I'm reluctant to get into this because OGH apparently tolerates your behaviour, but for as long as I've been reading this blog you've been derailing otherwise-interesting conversations into personal insults, accusations of bad faith and metawankery (insisting that everyone involved follow your own invented rules of debate or you won't talk to them, for instance). This is a great shame, because you have some interesting things to say, and I'd love to be able to read them without the snark and aggro. Perhaps you could try to assume good faith in future? For instance, in your post #15, you could have said "Yes, I know that, but Flash degrades significantly faster, right?" instead of rudely criticising the quality of abhelton's contribution.

[Yes, I'm aware that this post is precisely the kind of metawankery that I'm criticising in SoV. Sorry about that, everyone.]


One major difference; most Europeans don't live in buildings with central aircon.


Sounds to me like the manufacturing process for this product is broken. After all, which manufacturer would design a process in which a minute error takes out a fifth of your monthly yield? (A (pseudo) monopolist, that's who---those, and the evil people who use parentheses within parentheses.)


Hello? Most of us don't live in buildings with aircon at all. Here in the UK, the last figure I saw was that around 2% of dwellings have aircon. (Central heating, in contrast, is probably closer to 80%.) Admittedly we're on the low side by general EU standards, but even in the southern countries aircon isn't ubiquitous.


I used "central ac" because #89 refers to "the US South", and I know some people who live in the desert states and in California, where that is more of a fact of life than central heating (regardless of type) is in Scotland! I'd think you're probably right about just how many Britons don't even think about single room ac as a luxury for a dwelling though.


@29, That would have been GM, and it was a couple of years after I quit working for them. The company had gone to "just in time" because that is what the Japanese manufacturers did, and anything Japanese became a business fad for US manufacturers in the late 80s. The limiting component was the engine computers made by the division I worked for. Because the cars cannot run without the engine computer (among other things, it controls the spark plugs and advances/retards timing), and because all the divisions were working on JIT as well, a strike at a single facility was able to shut down all of General Motors' plants.

The Japanese manufacturers have a much better labo(u)r-management relationship, nowhere near as adversarial as the US experience has been.

Also making things difficult, GM was trying to spin off the parts divisions to a new company called "Delphi", which only made the labo(u)r-mismanagement problems worse. Ford's spin-off was called Visteon. Chrysler had spun off their component manufacturing divisions during the Iacocca-era bailout.


I'm under the impression that in GENERAL the temperatures in Europe tend towards a narrower range in any particular area. Here in the US it can easily be in the 90s (F) in Chicago in the summer and below freezing for long stretches of winter. (I've never felt cold like Chicago at 20F with a strong wind off the lake. It freezes the words as they come out of your mouth.) And in the southeastern US we're looking at relative humidities of 80%+ when the temp is also above 80F. Until central air, this area just didn't develop. Or Dallas, with it over 100F for a month at a time.

Personally I can take the heat after growing up where August had many weeks of 95F and 95% H at 3 AM. But my family has threatened to leave at times if we didn't turn the air on in the summer. :)


In general, yes.

In Chicago, you have a Continental climate, due to the lack of nearby water (a piddling little puddle like your lake doesn't really count). It tends to extremes.

Western Europe, by contrast, has what is known as an Oceanic climate. We have an ocean to our west, and that tempers the extremes somewhat.

Go to Eastern Europe, or the US west of the Rockies, and you could reverse your generality.


I was referring to the issue of why Western Europe, especially in the north, seems to have a very low penetration of central air compared to much of the US. Yes, west of the Rockies has a very different type of weather than east of said mountains.

And of course the power rates in the US over the last 60 years compared to Europe have played somewhat of a role. Ditto the issue of retrofitting a 300-year-old building with 1'-thick masonry walls in Europe vs. new construction in the US.


Well said, and much more politely than I could manage



The buildings you get in the different areas have evolved over centuries for the local climate. That may mean thick masonry walls, but on the other hand, those very walls slow down heat transfers and obviate much of the need for A/C in many places.

Looking at the traditional hot climate building styles, it seems to involve not letting heat and light get down to street level in the first place. Lots of white paint, high walls, narrow streets. Probably the exact opposite of a Floridian suburb.


Reconstruction contracts after something like the Carrington event? About as useful as applying band-aids to shotgun wounds. Electricity is everything in today's society. Very few jobs don't need it.

If something like the Carrington event happened, it wouldn't be about 'reconstruction'. Maybe certain enlightened regions will have measures in place to shut down the grid before the solar storm hits, but most places won't, especially not places like the US, where they skimped on maintenance and thus had blackouts.

With a fair possibility of parts of the developed world being without electricity for months or years, that'd be total chaos and extreme economic disruption that'd make the present economic crisis look like a golden age.


One way of thinking about this problem is that if you are so stupid as to build a factory that destroys millions of dollars of your stock if the power is interrupted for less than a second, you frankly deserve some sort of prize for crap planning, not worried handwaving upon the part of everyone else about how "interconnected" the world now is. The rest of us inhabit reality and know it isn't always that well run.

Result: People might stop lobbing electronics away as if they are yesterday's fashion accessories. You'll be a bit more careful if it's worth a lot to replace.

Why don't they own their own power generation facilities if the situation is that sensitive? Insurance against lost revenue now more than justifies this.


Dude, they do own their own power generation facilities -- that's what backup generators and UPSs are for.

Unfortunately backup generators don't always kick in when they're supposed to.


@ 91:

You're right. This is metawank. But let me ask you something: were you aware beforehand that, "All memory degrades over time. Hard disks with spinning platters eventually break . . ." A simple question, and a yes or no will suffice. Now, is this general knowledge the norm rather than the exception? I'd say the former rather than the latter, but maybe that's just me. That being said, what did that particular post add to the discussion? I'll admit to being an old man and cranky at times, especially at the end of the semester when I hear the usual round of excuses for why someone should get a C- as opposed to a D+. But to my mind posts like the above, or posts wherein it's demanded that claims be proved wrong else the default assumption is that they're right, etc. are the ones that are insulting and dismissive.

In other words, if I ask why humans are warm-blooded, don't insult me by telling me it's because they're mammals. If I ask for proof that raising the minimum wage causes unemployment to rise, don't demand that I prove that it doesn't or else accept the claim as true. That sort of thing. Bear in mind that, among other things, I teach mathematical proofs for a living, and it's something of a professional requirement that I explain why certain explanations are kosher and why others are not.

That being said, you're right. 'Tis the season and all that. I probably should have gone with the assumption that abhelton didn't know how trite he was being and should have been kinder in pointing it out. What should I have said instead? Specifically. Since this is metawank, you might want to reply via email rather than making a public post.


The problem is testing the failover system -- if you test it properly and it doesn't work, then you suffer the consequences of an actual breakdown, including that 70-millisecond interruption that headlined this piece. That's expensive in its own right, it was self-inflicted, and worse still, the testing won't necessarily stop it from happening again.

In the other leg of the Trousers of Time you tested the failover process and it worked! Great! Now you are totally 100% confident that if a real glitch occurs it will work perfectly. Won't it? Maybe you should test it a few more times just to be sure...


I liken the Carrington Event to Y2K. While Y2K was a general bust, the economic impact of all those companies spending time on development, money on upgrading equipment and software, and hiring new employees to do both can account for a significant percentage of economic growth at the end of the last century. Best part was that they had to do it or risk huge lawsuits from stakeholders if things actually did go wrong. Can you imagine the lawsuits (assuming civilization survives) against companies that knew a Carrington-like event was coming and failed to prepare adequately for it? Should be just the sort of economic stimulus that would finally get banks and companies to spend the cash hoards they're sitting on. Now we just need to hire a PR firm to stir up the hornet's nest.


@ 99:

I was referring to the issue of why Western Europe, especially in the north, seems to have a very low penetration of central air compared to much of the US. Yes, west of the Rockies has a very different type of weather than east of said mountains.

Very true. No need for AC or even fans really in southwest Oregon. We didn't have central heat either; but fireplaces or some sort of wood stove was the bare minimum for getting by in the winter.


John, Y2K was not a bust; it was the most successful emergency response in IT history. The reason nothing happened was because a lot of folks put in a lot of skull-sweat and overtime making sure that nothing would happen. Without the remediation effort, there would have been hell to pay on 1/1/1.

Ditto the Carrington event; if power and telcos spend money hardening against it, then there's a major coronal mass ejection and we handle it okay, the natural response of the shareholders won't be "gosh, you saved our business, have a token of our gratitude!" -- it'll be "why did we bother spending all that money preparing for a non-event?"


I just sent a reasonably long and terribly earnest email to the address listed on your LJ profile, but it bounced: where should I send it instead?


That's odd; it works for me and other people are getting in (although this looks like it's all local mail so far.) Try: instead; Rich could be messing with the message servers over break. Sorry for the public posting; I can't email you privately, obviously.


@103: sometimes it's cheaper to *not* prepare for such a situation and just clean up afterwards. I work for a company moulding plastic parts that uses more energy than the entire (small) city it's in. No, we don't have a UPS or diesels for our electricity supply. It would cost far more than just accepting that sometimes the machines *will* stop working due to some sort of blackout.
On the other hand, a blackout needs to last several minutes before the situation becomes "bad" (meaning the heated plastic solidifies in machine parts not designed for it), and even then recovery is a matter of a few hours.
Sure, in this case of the millisecond blackout they had to throw away some days' worth of production (most of it probably would be salvageable -- but who wants to pay for the tests on such low-value parts?), but if the frequency of such occurrences is low enough, it may be worth *not* investing in a 99.9999999%-perfect on-site UPS, settling for maybe 50% reliability and just accepting the rest. What's the worst that can happen? A few million parts not produced, perhaps slightly higher prices for the remainder, nothing really world-shaking.
Here in Germany (and probably everywhere else), large enough companies and the state (down to town level, IIRC) don't *have* to carry fire insurance, for the same reason: the chance of enough buildings burning down simultaneously that the rebuild cannot be financed is small enough that paying insurance premiums isn't worth it (and the insurance company wouldn't be able to pay in that case anyway).
The other side of the coin is nuclear reactors and the like, which are specifically built with quadruple or more redundancy in emergency power supply (even different ways "in" for external power), because a catastrophic breakdown simply would be unacceptable.

As to the New Orleans debate: as bad as the cost in lives was, the problem discussed here was whether, in today's age, a catastrophic event could still wipe "us" out. NO showed us that you can take an entire large city in a heavily connected part of the world off the map -- and the system still holds, and has more than enough reserves to (slowly, and only if it wants to) rebuild that region. Mankind is, in a way, "too big to fail". *If* the catastrophes are local enough.
I predict that even the loss of 90% of our satellites would not end "us" as a civilisation. Communication nowadays mostly runs earth-based (undersea cables), so that wouldn't be stopped. Diminished, true. More costly, perhaps. But except for satellite TV and military communications (which are probably more hardened anyway, or could forcefully acquire the remaining satellite connections), what really depends on satellites? Oh yes, weather forecasting :) But mostly this would be icing on the cake.
Total loss of electric power for a continent or two over more than a few days on the other hand....

@the USB stick problem: the German computer magazine c't in 2006 or 2007 tried to actively induce wear in a USB stick by writing the same data to it over and over. 16 million times. They found not a single error.
Electrostatic discharge onto the contacts from your fingers, or mechanical stress, seem much more likely reasons why those sticks fail. And like probably everyone here, I have machine-washed more than one stick more than once; once dried, they worked perfectly afterwards. Now if only they didn't need a mechanical connector...



"If something hit the earth as a whole hard enough to knock us back on our heels with power generation, and whatnot, it's entirely possible that we might not be able to recover, if the hit is good enough. Not unable to recover in a few weeks/months/years, but EVER."

This is all from unreliable memory, I wish I'd kept a copy and can't find one on Google Groups, but back when Usenet was still usable (15-20 years ago) one of the news groups (a rec.sf.* one probably) discussed the problem of restarting the UK's electricity grid if it was all knocked out by some Wyndhamish disaster. (Quick background summary for those not familiar: starting up a power station requires power on the grid to bootstrap it these days.)

The answer was that a large ocean going liner can generate ~250MW of electricity, which would be enough to fire up a larger land based power station, which could fire up more, etc. (There was a lot more interesting detail on how the liner's huge diesels get bootstrapped, but that's not relevant here.) That would get this island powered, so gas systems and the like could be used. This could all be used to bootstrap across Europe, and the same approach could be used on other continents. That should bring mining, oil and gas extraction back on line.

OK, this isn't going to handle a dinosaur killer asteroid that trashes everything, or a plague that wipes out 99% of the population, but we are pretty good at surviving.


Arthur, bootstrapping an AGR nuke plant may not need external juice. It *will* need a polonium initiator to kick off the reaction, but the AGR plant at Torness that I visited had four sets of paired 10 MW diesel backup generators. Any two generators could pump out enough juice to get the coolant circulating and bring the reactor up to power (although reaching full power would be a multi-day process -- not mere hours). And I assume that if the grid was reconnected carefully so as not to overload the reactor, one reactor complex could be used to help bring the others back online.

And then there are the fast response gas turbine generators, and things like the pumped hydropower storage reservoir in Wales. Those should be good for a few gigawatts, with which to start bringing the rest of the national grid back online.


Google's 2007 study of hard drive failures in their data centers revealed interesting behavior that did not show up in previous small-scale studies, manufacturer guarantees, or theory. I hope that a similar study will eventually appear for SSDs so that we can rely on data instead of theory and speculation (at least until the next implementation change renders the conclusions invalid).

That study has started to be done; various researchers around the country, some of them at UC Santa Cruz CS department. Papers at the EASY, HotDep workshops and FAST conferences, among others. Vendors not cooperating because it might make them look bad, etc. I've been at several of the workshops and conferences and talked to multiple authors and researchers about the problem.

I don't buy SSDs as a result. I would for a lifetime-limited cache function, where replacing the worn out drives is OK and corruption when it happens isn't going to lose the underlying data. I'd be OK with RAIDed SSDs in a laptop (hah, like there's ever enough drive bay space to do that). But I can't afford that.

In a few more years? Should be a lot better, even with MLC FLASH. The controllers are getting more aggressive at error handling, though wear-out is still a life limiting factor.

Also, phase change memory, memristors, magnetoresistive RAM, etc. All of these technologies give much longer lifetimes (write cycles like hard disk limits, rather than FLASH limits, etc). The future looks bright, question is whether it's 5 years out from having to keep worried about reliability here a lot, or 2, or 10. I am not sure which bucket we're in, and I follow the research here as closely as anyone I know on the systems side of things...


Most people in the US have at least small window units. Those who don't are frequently in urban poverty areas. I have a friend in Arizona who has a common kind of air conditioning down there: a swamp cooler.


According to the internet, Dinorwig (The pumped storage station in Wales) can do 1.8 gigawatts or so for a few hours, and Cruachan near Oban can do around 440 megawatts. Apparently the total Scottish capacity is 1.3 gigawatts, and I imagine would last longer than Dinorwig.

I'm rather more concerned about the coming energy crunch in the UK, which may well turn out to be a good chance for the usual suspects to hoover up lots of public funding and still make a profit. But that isn't exactly a resilience issue, unless it leads to our power supply becoming so narrowly based or inflexible that a single major error can cause problems.
Like when Longannet's coal conveyor broke down and a gigawatt of electricity generation had to be shut down. Or when Torness's cooling fans broke (IIRC due to radiation-induced embrittlement) and it was shut down for a while; I think it was also down for a month or two when lightning hit something. (Nuclear power is not as perfect as some people think.)


Saying that human beings are warm-blooded because they're mammals is a perfectly good answer. It's a shared ancestral trait for the clade.

The question is, "Why are mammals warm-blooded?"

Much more interesting.


Charlie, I'd want to look at how the actual alternators start up. It's not something based on a permanent magnet: it needs, from what I recall, current through a set of electromagnets. That's how the alternator warning light is triggered--it's the excitation current. Once the alternator is running the current comes through the regulator.

Another possible back-up power source would be a diesel-electric railway locomotive, but frequency might be a problem.


I remember seeing a calculation on the life of an SSD, based on the size of the drive and the bandwidth of the connection to the rest of the computer. It didn't look so bad. But I have since seen more detailed descriptions of how flash memory works, and apparently a single software/bus-level write can need more than three writes to the flash hardware cells, and that makes a big difference to the life.
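The lifetime calculation described above is easy to reproduce. Here's a minimal back-of-envelope sketch; the capacity, endurance, and bandwidth figures are purely illustrative assumptions, not specs for any real drive, and the 3x factor stands in for the write amplification mentioned in the comment:

```python
def ssd_wear_out_days(capacity_bytes, pe_cycles, bus_bytes_per_s, write_amplification):
    """Days of non-stop, full-bandwidth writing before every cell hits its
    program/erase limit, assuming perfect wear levelling."""
    # Each host byte written costs `write_amplification` bytes of actual
    # flash writes, shrinking the total write budget proportionally.
    host_write_budget = capacity_bytes * pe_cycles / write_amplification
    return host_write_budget / bus_bytes_per_s / 86400

# Naive estimate (one flash write per host write):
print(ssd_wear_out_days(128e9, 3000, 250e6, 1))  # ~17.8 days
# With the >3x amplification described above:
print(ssd_wear_out_days(128e9, 3000, 250e6, 3))  # ~5.9 days
```

Real workloads don't saturate the bus around the clock, which is why the simple version of the calculation "didn't look so bad" -- and why a factor of three in write amplification still makes a big difference.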


I believe the reason that telegraph wires melted during that event was that they were in active use. With current technologies examining the sun and its solar weather, we can get a minimum of several hours warning of a bad solar storm that will hit the Earth. Solar models can sometimes predict events a couple days in advance. With advanced warning, it is possible to power down the grid and put satellites in stand-by mode. While the solar storms can and will still do damage (especially to third-world nations who don't receive or heed the warnings), developed nations would be able to weather the solar storm without significant loss of infrastructure and with systems set up to repair what damage is incurred.

Rob H.


Getting the entire grid back online isn't as time-sensitive as getting some segments back online.

If you can keep your water and sewer going, then you reduce your public health risk a LOT (look at Haiti today) and reduce panic among the genpop.

Does your water company have a backup plan? Is your water or sewer gravity fed, and if not can they secure the remote generators that would be needed at the distribution pumps? While you're ragging on a $1b+ for inadequate preparation, how much water do you have in your house?

Check out for general disaster preparedness. It's got a ton of info whether your flavor of disaster is a solar flare, ice storm, pandemic, or even zombies, I guess. (Silly? Perhaps, but it's a PR sleight of hand so people won't think you're a member of some eliminationist militia. Plus, it helps avoid flame wars about whether a New Madrid redux would be more likely or damaging than collapse of the monoculture food system.)



I recently read Patterson's biography of Robert Heinlein, which mentions that something like what you described happened in the early 1930s to one of the large cities in the Pacific Northwest of the US. From what I recall, the power stations were not working, but an aircraft carrier was used instead of an ocean liner. Sorry, I don't have the book with me, so I can't give you exact details.


Strath-Tummel can do some 245MW for days at a stretch ( ), and I used to work for the then NoSHEB, in their research lab, so got "all over" from a base in Pitlochry.

Loch Sloy ( - Warning, needs decent broadband) can "only" do 520kW, but can keep that up for 40 days in an absolute drought (ignoring evaporation losses), and is specifically designed so that the turbines can be "crash-started", going from "no water flow" to full power in under 5 minutes.


#121 - Dave, reasoning with references in moderation queue, but Dinorwig (qv), Cruachan (qv), and some of the other Scottish hydro plants are designed to "crash start", without needing power from the grid to get them going.


Various factual information:

1. The American South was essentially unpopulated until the invention of air conditioning. One of Alistair Cooke's better "Letter from America" broadcasts tells this tale.

2. With respect to geomagnetic storms, the location of the magnetic north pole has a lot of influence on who gets hit how badly.

Last time we saw serious trouble was 1989, and one of the things we learned was that the Mississippi is special: the river runs along a fault-line which contains a lot of quartz, making it a pretty good insulator with a tendency to piezoelectricity.

As a result of this, more physical damage happened where people had strung electrical wires, such as telecoms, power or steel bridges across the river.

This discovery also explained a lot of the anecdotal evidence from 1859.

These days the MNP is on its way to Siberia, quite fast actually, and therefore for the next interval of time Russia will get special solar attention, and next up will be Europe.


(Apparently a member of a "seminal proto-punk group" :-)


The bit to pay attention to is the long pipeline in chip fabs: it's two months from start to finish.

This means emptying the pipeline to do a test takes two months ...

So, you may have adequately tested that generator and UPS when you fired up the fab, but when was that? It's not like you can run the test every Monday to make sure that nothing has worn out.


The American south-west, which is dry as well as hot, was unpopulated until the mid 20th century. The south, meaning the south-east from Baltimore to Atlanta to New Orleans, has some of the oldest cities in the US. The summers were always deeply unpleasant, but between slave agriculture, the enormous cargo trade on the Mississippi, and a ready water supply, it was populated.


I have seen a belt-and-braces continuous-power backup system although it too had to have downtime occasionally for maintenance in which case they switched to using grid power directly while the kit was being fettled and crossed their fingers...

The incoming mains was converted to DC using mercury gas rectifier tubes (this was a while back) and floated across a large bank of batteries. The battery bank fed a DC motor which drove an AC three-phase generator set on the other side of a large flywheel. The generator supplied the computer system in question. A diesel engine was also coupled in-line to the generator shaft via a clutch held open by an electromagnetic linkage powered by the incoming mains feed.

If the mains power failed the clutch released and the DC motor and flywheel plus the float battery kept everything running for a few seconds as the diesel engine spun up and took over the load. If it failed to start the battery float would keep the motor-generator set running for a few minutes while the staff on site tried to get things working again. A 70-millisecond glitch might have been recorded on the paper trace from the chart recorders but it wouldn't have caused this system to even blink.


I didn't say nobody lived there, I said "essentially unpopulated".

Look at the US census if you don't believe me.

@Alastair McKinstry:

A typical test schedule for a serious UPS owner is something like: second Tuesday every month, a five-second mains break to test the UPS; last Tuesday every month, run one hour on the generator.

Serious installations have two UPSes in series and a generator, so that any one component can be tested without ever trusting the mains.

One interesting tidbit: more and more battery-backed devices, from laptops to UPSes, make it harder to bring the grid back up after a blackout.

A business district that dropped off the grid as an X MW load will typically come back at 1.4X MW, and within seconds increase to 3X MW as everybody's batteries start charging.



The only other thing you can do is have separate "crash start" gennies, accumulators and inverters to handle switching times, and a test load bank big enough to handle the gennies' peak output safely.



OGH said "bootstrapping an AGR nuke plant may not need external juice. It *will* need a polonium initiator to kick off the reaction"

I'm amazed, I thought initiators were only used to ensure weapon-mode nukes went "bang!" rather than "fizzle".

I don't keep count, but I'm pretty sure I've never read any post and subsequent discussion on this blog that didn't teach me something I didn't know.


To which I reply, sigh. This is mischief-making, pure and simple.


I asked one of the engineers who ran it just what a cold start entails, and got a half-hour lecture. It's a surprisingly intricate process, involving shipping a short-halflife "spark plug" from Harwell or Sellafield or somewhere, firing up the diesel generators to get the primary coolant circulating, and slowly bringing the reactor up to criticality -- it doesn't run on highly enriched uranium and a cold pile can't go critical on its own (probably another safety feature). If I remember properly it takes something like 48-72 hours to bring it up to the point of producing surplus power, and a week or so to stabilize it and check everything's running smoothly before it can start driving the grid.


I'm inclined to agree. Lars, consider your wrist slapped.


It wasn't intended as such and I apologize if it seemed that way.

Not to belabour an OT seam - the question of why humans are warm-blooded is not a simple one, which was your point, but in systematics, homeothermy is simply a shared derived character - "because they're mammals" is an acceptable explanation in that context.

I've been teaching this recently, and teaching tends to shape your thinking, as you pointed out.


Looking at the description on Wikipedia of the Urchin nuclear detonator (which may be a bit too detailed), it requires about 1 curie of Polonium-210 for each 2.5M neutrons per second. (Plus excess Be-9 and a bit of gold plating to keep the alphas away from the Be until detonation.) The original Urchin used 50 curies (11 mg).

Now, the neutron deficit is quite small on a percentage basis in any workable reactor, but on an absolute scale it must need a lot more than a bomb pit just a few centimeters across. And making this neutron source involves mixing the Polonium with molten Beryllium at thousands of degrees. Any recipe that calls for mixing a few hundred curies of Po-210 with molten Beryllium I'd prefer to observe from a safe distance, say about two counties away.
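The figures quoted above are easy to sanity-check. A quick sketch, using the standard half-life and molar mass of Po-210 together with the 2.5M n/s per curie yield quoted above:

```python
import math

CURIE = 3.7e10                       # decays per second
AVOGADRO = 6.022e23
PO210_HALF_LIFE_S = 138.4 * 86400    # ~138.4 days
PO210_MOLAR_MASS_G = 210.0

# Neutron yield of the original Urchin: 50 curies at ~2.5e6 n/s per curie.
neutrons_per_s = 50 * 2.5e6          # 1.25e8 n/s

# Mass of 50 curies of Po-210, from activity A = lambda * N.
decay_const = math.log(2) / PO210_HALF_LIFE_S
atoms = 50 * CURIE / decay_const
mass_mg = atoms * PO210_MOLAR_MASS_G / AVOGADRO * 1000
print(f"{neutrons_per_s:.3g} n/s, {mass_mg:.0f} mg")  # ~1.25e+08 n/s, ~11 mg
```

The ~11 mg result matches the mass quoted above, which suggests the Wikipedia numbers are at least internally consistent.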


> The American south-west, which is dry as well as hot, was unpopulated until the mid 20th century.

Oh, dear. I have to conclude that I don't exist, as my ancestors, as well as I, were putatively born there (El Paso- southern New Mexico - southern Arizona) well before 1950.



> I have a friend in Arizona who has a common kind of air conditioning down there: a swamp cooler.

Indeed, growing up in Warren, Arizona in the 1950s, our house was cooled by a swamp box and I never remember any problem with excessive heat. In addition, my father's 1939 Ford(*) had a cylindrical swamp cooler that hung out the window into the airstream and blew cool(er) moist air into the car. Also quite satisfactory -- we drove across the desert to San Diego more than once with it.

(*) I still kick myself for selling that car when I inherited it in the 1960s.


> And making this neutron source involves mixing the Polonium with molten Beryllium at thousands of degrees.

Doesn't need to. The important reaction is (alpha,neutron), which means that a suitable nucleus (beryllium) absorbs an alpha particle (emitted by polonium) and spits out a neutron. Temperature has little to do with it.

Actually, modern US weapons don't use (a,n) initiators at all, but miniature particle accelerators that produce neutrons via DT fusion. If I were making a DIY bomb, I'd certainly go with the (a,n) initiator because of its simplicity, but there are other ways to do it. And, for a gun-assembly uranium weapon, you don't really need an initiator at all if you're willing to put up with some uncertainty in the yield.


No, you misread my posting. Why aren't they generating power as their basic mode of operation? It is not due to economies of scale: There are quite small examples where this does happen, and not just massive facilities like nuclear stations, etc, etc.

Hospitals actually do this all of the time in some locations, as they use the hot water generated by diesel engines and boilers to heat the building: combined heat and power, and a safety-critical situation in the form of the operating theatres and ICU facilities that demand it. Kirklees council in Huddersfield got in trouble for selling surplus hospital electricity to the national grid to supplement its income on one occasion. It is not what the equipment is there for, after all, and you're wearing it out by doing it.

(Someone spotted that, but the people in charge evidently hadn't. This is the British we're talking about, after all.)


I was talking about the process for making the initiators, which I found online in a DOE document. The Po is melted into the Be to form an alloy. It is only a little above ambient when in use (0.1 W per 50 curies of alpha, according to Wikipedia). Other methods of getting the alpha emitter into intimate contact with the Be should work, such as plating onto Be powder. Wikipedia also says that other isotopes such as Americium-241 and Plutonium-238 are the more modern alternatives to Polonium. Californium all by itself gives over 10^12 neutrons/s/g, but it is expensive.

I found a manufacturer of neutron tubes (D-T reaction using spark or Penning design) which will give 10^8 - 10^9 neutrons per second at the flip of a switch with a 500 hour life. If the 7+ cm diameter is compatible with the reactor, I'd think those would be a better way to go than the other sorts.

Anyway this is a bit of a digression from what I think is the interesting topic - how can technological society be rebooted after a disaster?


Well, it still is unpopulated compared to the teeming throngs of the east coast and the old world. I used to live out in Odessa Tx. (Motto: Gateway to Jal;), and traveled a good deal in NM. I'm still not sure some of the people I met weren't hallucinations brought on by the heat, for instance that old guy near Truth-or-Consequences (IIRC) who swore that Billy the Kid escaped and lived to a ripe old age, and that he had known him. (Or Brinton Turkle up in Santa Fe who illustrated children's books in color despite being monochrome color-blind, and such entertaining tales of his childhood in the mortuary industry.)


for minor reference:

comp.risks has the story of a city in NZ breaking its supply lines. So badly that every time they brought up supply to the city, they burnt out the supply line. The solution was a navy destroyer feeding the city while they went round and turned off everything -- then they could actually connect to the remains of the grid.


second reference

as long as we have good solar observation, we get 3-4 days' warning of a coronal mass ejection.

Shame all the spacecraft are behind the sun as of Feb 2011.


rich!, that sounds like a mangled version of:

I missed that one (being safely in the Docklands at the time). I was around for
An additional substation has since been completed to serve Auckland.



The more complex and interconnected a system (like the world economy) is, the -less- vulnerable it is to catastrophe. It works around injuries better; there are more potential alternatives.

Note that the areas that have killing famines are the ones dependent on local food supplies. The great metropolitan centers completely dependent on international trade are the ones with absolute security. When was the last famine in Singapore?

A lot of people find this counterintuitive. I don't see why.


I'm inclined to think of this as a sort of ecosystem kind of problem, e.g.:

cut down 5% of a rain forest, it probably grows back fairly quickly without much loss of biodiversity
cut down 50% of a rain forest, it loses a bunch of diversity and takes proportionally longer to grow back
cut down 95% and well, you're not quite starting from scratch

Maybe my numbers are off but I think I've made my point - it jibes with @148, but ecosystems tend to reflect a power (I think they use n^3 or 4) of their size in biodiversity, which I think is relevant to recovery time.

I haven't thought this through completely as a paradigm but I think it fits to a certain extent.

Also, I'm inclined to think that even if we bombed ourselves back to the stone age, the cheap energy (mentioned in @65) is still available in U-235 or otherwise, and wind and/or hydro could power your centrifuges...
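The power-law relationship gestured at above is conventionally written as the species-area relationship, S = c * A^z. Note it is usually stated as a fractional power of area rather than n^3 or 4; the exponent z = 0.25 used here is a commonly quoted textbook value and an illustrative assumption, not a fitted figure:

```python
def surviving_species_fraction(area_fraction_remaining, z=0.25):
    # Species-area relationship S = c * A**z: species count falls slowly
    # at first, then losses accelerate as the remaining area shrinks.
    return area_fraction_remaining ** z

for cut in (0.05, 0.50, 0.95):
    frac = surviving_species_fraction(1 - cut)
    print(f"cut {cut:.0%} of the forest -> ~{frac:.0%} of species persist")
```

Running it gives roughly 99%, 84%, and 47% of species persisting for the 5%, 50%, and 95% cuts, which matches the intuition in the list above: the first slice is nearly free, the last one is catastrophic.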


@ 146
We are 1 au = 8 (approx) light-minutes from Sol.
So, 8 minutes after a major coronal mass ejection, it is seen by observatories.
Meanwhile the charged particles, which will cause any disruptive effects are travelling much more slowly, are they not?
The "solar wind" normally has two components, travelling at approximately 400 and 750 km/s, whereas light travels at 3*10^5 km/s.
So we have the time-differential between those two speeds, over that distance, to do something about it - like power everything down, that we can afford to.
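Plugging in the figures above gives the actual warning window (1 AU and the two quoted solar-wind speeds; note that a fast coronal mass ejection can outrun the ambient wind, so the real window may be shorter):

```python
AU_KM = 1.496e8      # mean Earth-Sun distance, km
C_KM_S = 3.0e5       # speed of light, km/s

def transit_hours(speed_km_s):
    # Time for something moving at the given speed to cover 1 AU.
    return AU_KM / speed_km_s / 3600

print(f"light:           {transit_hours(C_KM_S) * 60:.1f} minutes")  # ~8.3 minutes
print(f"fast solar wind: {transit_hours(750) / 24:.1f} days")        # ~2.3 days
print(f"slow solar wind: {transit_hours(400) / 24:.1f} days")        # ~4.3 days
```

Which lines up with the 3-4 days' warning mentioned at @146.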


Steve, you're correct up to a point -- for cases of supply disruption where there are multiple sources of the commodity in question. Where there's a single monopoly supplier there's often a choke point that threatens a single point of failure. And where there's a highly specialised market with a couple of big incumbents, all it takes is a couple of co-incidents to drop everyone in a world of hurt.

The classic cases are utilities. Electricity: if something fucks with your national-level grid there will be serious problems. How serious depends on how electricity-dependent your infrastructure is, and other issues: for example, I can see the effects of the same hypothetical three-day grid outage in the US south-west ranging between trivial and a cause of mass fatalities, depending on whether it occurred in spring/autumn, or in the middle of an extreme heat wave in summer (with daytime temperatures spiking to over 44 celsius -- at which point, in the absence of aircon, mammals tend to keel over and die).

Again, water: back in the mid-1980s, the primary mission of the Singaporean armed forces was, in event of a crisis, to blitz their way across the border and thrust 10-20Km into Malaysia, to secure the reservoirs from which the city state drew 80% of its drinking water. (That may have changed since then.)

But there may be other critical choke-points. I'm interested in identifying what they are and where they're emerging. High tension transformers might turn out to be one, in event of a repeat of the Carrington event. FLASH memory isn't yet deeply-enough embedded to cause problems if the supply is interrupted in the short term, but ...hmm, how many chemical plants are equipped for manufacturing and storing chlorine trifluoride? (Just about the strongest known oxidizing agent, extremely hazardous, used in small quantities for cleaning equipment in chip fabs.) I wouldn't be surprised if there were only 1-3 ClF3 factories on the planet, and it's not obvious what could substitute for it.


The point Lars could have made is that there are at least four possible answers to the question "why are humans warm-blooded", as there are to any "why" question - loosely speaking it's the Four Causes.

Why are humans warm-blooded?

Because they generate enough heat to keep body temperature higher than ambient.

Because they inherited the trait from their ancestors.

Because the human hypothalamus monitors body temperature and regulates it by triggering sweating, shivering and various other control mechanisms.

Because they derive an evolutionary advantage from being able to maintain activity in a wide range of temperature environments.

All correct answers.


Charlie, the Third Reich was manufacturing chlorine trifluoride in decatonne quantities right until the Red Army rolled over their plant. We're talking a deeply distressed economy, about the size of modern South Africa's, using seventy-year-old technology and an eighty-year-old synthesis.

That's not a choke point.


More than you ever wanted to know about the US electric grid.


Ooh, thanks. I didn't know that. Okay, so any supply-side shortages are down to lack of demand ...

Okay, let's re-run the question, with something that is in short supply. I'd pick rare earth elements (the principal supplier of which is currently China) although I suspect the long-mothballed mines in the US are about to get dusted off if the Chinese are serious about not exporting these materials ...


It's hard. You want something with very low long-term price elasticities of both supply and demand -- the consumers have to buy it (it's essential and can never be substituted), and it's impossible for the producers to make more of it, no matter how much money you throw at them. Even very basic things like food and energy don't fit the bill.

I'd have to say we're just not at that point; and given the leveling off of basic needs compared to total income (Engel's Law -- not that Engels, the other one) we might never be. We're actually a fairly easy species to satisfy: good sex, comfortable shoes, and a warm place to go to the bathroom? Doesn't sound half-bad to me.


High tension transformers might turn out to be one, in event of a repeat of the Carrington event.

I note that the Wikileaked cable listing key infrastructures outside the US mentions a remarkable number of manufacturers of heavy electrical components and specifically transformers. You'd be surprised which countries make them.


Here we are. Really big transformers (that are used in the US grid) from South Korea, Germany, and (oddly enough) Mexico.


There's another way of looking at this problem.
Suppose you have some hypothesised *EVENT* that has screwed something vital that our society uses - let's suppose it's a bug/bacterium that eats IC chips, or some internal component of same - what then?
You can see it happening, and you KNOW that you've got to (temporarily) regress your technology/industry for some years, until you work around the problem.

In this case, one would have to re-set to sometime between 1950 and 1970.
Surprisingly difficult. A lot of manufacturing techniques, even from so recent a time, have been "lost" - at least to the point where you would have to re-construct the factories to make those things with that technology.

Alternative view - a lot of technologies will carry on for a long time, provided maintenance is done and provided some evilly-minded person(s) doesn't screw them up - then you are fucked.
Classic example (History again!)
Although the "Empire" had long moved to Byzantium, and Rome had been sacked by Alaric in 410, it wasn't until at least two sieges later that the Goths cut the aqueducts to the city, in 537.
Which stuffed it completely as far as civilisation IN ROME was concerned.


There's lots of rare earth in Afghanistan, but it'll take a long time to get.


Dude. Rome was Byzantine for two centuries after that. The aqueducts were repaired. You can go see the Column of Phocas in the Roman Forum, erected in 609. Rome just wasn't the local capital: that was Ravenna.

Then, of course, the coronation of Charlemagne in Rome, Christmas 800 AD. Kind of famous.


WERE the aqueducts repaired - sure about that?
Yes, we know about Karl der Grosse's coronation - so what? Rome was - er - SYMBOLIC. It just wasn't important.
(The Lombards' capital was - erm - Pavia, then Milan (I think))


Since I have (among other works) a book on the Roman aqueducts four feet from my head, and Krautheimer's book on medieval Rome six feet to my right -- and for that matter, Gregorovius on my hard disk -- yes, I am reasonably sure about that.

Byzantine and early medieval Rome weren't as large as imperial Rome, no. That may have had over a million people. But it was still a large European city for its day: maybe a hundred thousand people during the exarchate.

The low point of Rome's population was most likely during the Great Schism in the 14th century, after the Black Plague. Twenty thousand people or so.


@ 164
The claim that Imperial Rome may have/did have a million people was completely exploded by Prof Colin McEvedy in his historical atlas series, published by Penguin.
He pointed out the size, on the ground, of Rome when it did pass a million, some time in the early 1930s ... and the size in, say, Titus' time ... oops.
The myth lives on, but that is what it is, a myth.
McEvedy also points to the imperial census data - and asked: what dictator didn't try to inflate his state's claim to greatness? - followed by some numbers.
Oops again.


Greg, so how many people does the Professor suggest as the population of Rome at the time of Titus?


... dude, you don't know what you're talking about. The million-plus numbers are based on logistics estimates. Here's how Beloch estimated the population in the 1880s:

1) from the area of the ancient city, multiplying it by a high pre-industrial population density, for an upper bound.

2) from the number of imperial donatives: this gives a lower bound.

[2a) one can try to assume a resident/donative ratio, but this is a separate issue, a hard one.]

3) the recorded food supply. Rome required grain from outside Latium. Its foreign policy into the 700s was based on its need to secure overseas grain.

Beloch derived a figure of 800K using these methods. Later demographers have come up with higher numbers using higher amounts of grain imports. Nothing to do with self-aggrandizing imperial censuses.

Other methods include using estimates of water supply, and even back-calculations of attendance at the races held at the Circus Maximus. One assumes a racetrack that can accommodate 250 thousand people was meant to hold 250 thousand people.
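For concreteness, the logic of these bounding methods reduces to a few lines of arithmetic. Every input number below is an illustrative assumption chosen to show the shape of the calculation, not a sourced figure:

```python
# Sketch of Beloch-style population bounding for ancient Rome.
# All inputs are illustrative assumptions, not sourced figures.

def upper_bound_from_area(area_ha, density_per_ha):
    """Upper bound: walled area times a high pre-industrial population density."""
    return area_ha * density_per_ha

def lower_bound_from_donatives(recipients, residents_per_recipient):
    """Lower bound: recorded donative recipients, scaled by an assumed
    resident-to-recipient ratio (the hard, separate issue mentioned above)."""
    return recipients * residents_per_recipient

def estimate_from_grain(tonnes_per_year, kg_per_person_per_year):
    """Central estimate: recorded grain imports divided by per-capita need."""
    return tonnes_per_year * 1000 / kg_per_person_per_year

# Illustrative inputs (assumptions):
hi = upper_bound_from_area(1400, 1000)       # ~1,400 ha of urban area, dense packing
lo = lower_bound_from_donatives(200_000, 3)  # ~200k dole recipients, household factor 3
mid = estimate_from_grain(200_000, 250)      # 200k tonnes of grain, ~250 kg/person/year

print(f"lower ~{lo:,.0f}, grain-based ~{mid:,.0f}, upper ~{hi:,.0f}")
```

With these (made-up) inputs the three methods bracket each other at roughly 600K / 800K / 1.4M, which is the kind of convergence that made Beloch's 800K figure credible.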

Anyhow. McEvedy was a polymath Penguin hired to create its series of historical atlases. (As such, I have a lot of sympathy for him.) He gets cited a lot, mainly by people who have used his work for a quick number and didn't have the time or the background to interpret ongoing debates in population history. And to be fair, at the time he came up with his numbers for Rome (1978), Roman demographic and economic studies were in the doldrums.

But "completely exploded"? "A myth"? I'm afraid your partisanship says something more about you and the way you form opinions than about actual research.


Concerning U.S. rare earth mines, I believe all the equipment was sold or scrapped; reopening will not be quick or cheap. About what one can expect when national industrial policy is "Whatever the money wants".


@ 166 / 7
IIRC - and without digging the book out of storage (don't ask) - somewhere in the 200-300,000 range.

Which seems reasonable.

I take the logistics point, but it wasn't until the railways came along that really large cities became viable, in terms of moving materials (including food) both in and out. Port-cities could, of course, be larger than landbound ones, and of the latter, if you had a readily navigable river, that also allowed said urban area to be larger in population terms.


Tangentially, flash memory has a number of upsides compared to non-solid-state disks. I was, for a while, considering cryogenic cooling with liquid nitrogen for a few machines; I decided that it would be infeasible to do this with machines that have moving parts, but aside from fans the only moving parts in a typical machine are in disk drives. Flash memory would be a good match for this type of thing, especially since in the kind of task for which you'd want a couple of slightly overclocked bare motherboards in liquid nitrogen you probably won't have problems with swapping (if you can afford this kind of rig, you can afford shitloads of RAM), nor will you want lots of storage (since this setup is mostly good for something where the CPU is the bottleneck). It may be infeasible due to the operating temperatures of particular components, which is something I haven't researched.

Yes, flash now (practically speaking) has a similar life expectancy to normal spinning-platter drives -- and is probably better than some spinning-platter drives in conditions with lots of jostling, sudden changes in temperature and magnetic fields, cosmic rays (lol), etc. This is mostly due to being tricksy about where to store bits, though, and when to store them, rather than a fundamental change in the expected lifetime of individual cells. I would trust a flash drive now in, say, a netbook -- but I still cringed when Vista came out and Microsoft was suggesting that everyone buy thumbdrives to use for swap space (though it lowered the price of thumbdrives significantly, and I wasn't running Windows anyhow, so it was a good deal for me).
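The "tricksy about where to store bits" part is wear levelling: the drive controller spreads logical writes across physical blocks so no single cell group hits its erase limit early. A deliberately toy version of the idea (nothing like a real flash translation layer, which also handles garbage collection and bad-block management) might look like this:

```python
# Toy illustration of wear levelling: repeated writes to ONE logical block
# get spread across ALL physical blocks, so erase wear stays even.
# This is a simplified sketch, not a real flash translation layer.

class ToyFlash:
    def __init__(self, n_blocks):
        self.wear = [0] * n_blocks  # erase count per physical block
        self.mapping = {}           # logical block -> physical block

    def write(self, logical_block, data=None):  # payload ignored in this toy
        # Physical blocks holding OTHER logical blocks are off-limits.
        in_use = set(self.mapping.values()) - {self.mapping.get(logical_block)}
        free = [p for p in range(len(self.wear)) if p not in in_use]
        # Always remap onto the least-worn free block.
        target = min(free, key=lambda p: self.wear[p])
        self.mapping[logical_block] = target
        self.wear[target] += 1

flash = ToyFlash(8)
for _ in range(100):      # hammer a single logical block 100 times
    flash.write(0)
print(max(flash.wear) - min(flash.wear))  # prints 1: wear spread almost evenly
```

Without the remapping step, block 0 would absorb all 100 erases while the other seven sat untouched -- which is exactly the failure mode the swap-on-thumbdrive suggestion worried people.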

Does anyone know how memristors are stacking up now in terms of the life-cycle dept? It seems like it should be pretty trivial to make them store bits.


Another example of interdependency and lack of flex in the system.


As others have mentioned, chlorine trifluoride was made as a rocket oxidizer during WW2, as it was the most powerful oxidizer known at that time (and for all I know, it still is, as it beats the pants off liquid oxygen). My favo(u)rite passage from Ignition! An Informal History of Liquid Rocket Propellants (John Clark's history of liquid fuel propellants) is:

It is, of course, extremely toxic, but that's the least of the problem. It is hypergolic with every known fuel, and so rapidly hypergolic that no ignition delay has ever been measured. It is also hypergolic with such things as cloth, wood, and test engineers, not to mention asbestos, sand, and water — with which it reacts explosively. It can be kept in some of the ordinary structural metals — steel, copper, aluminium, etc. — because of the formation of a thin film of insoluble metal fluoride which protects the bulk of the metal, just as the invisible coat of oxide on aluminium keeps it from burning up in the atmosphere. If, however, this coat is melted or scrubbed off, and has no chance to reform, the operator is confronted with the problem of coping with a metal-fluorine fire. For dealing with this situation, I have always recommended a good pair of running shoes.

Anything capable of burning/oxidizing water deserves your extreme respect. And the ability to oxidize silicon dioxide (aka sand) is the property for which it is desired in semiconductor fabrication.


So we have something this reactive being made, around seventy years ago, in tens of tonnes, as an oxidiser for rockets?

We all know the Nazis were crazy, but isn't this pushing it a bit?

(I could believe it of the Manhattan Project.)


They were more interested in ClF3 as a monofuel for flame throwers. (Yes, I know it's an oxidizer, not a fuel.) Squirt ClF3 on anything and what it hits will burn. As it'll chew its way through concrete, the potential for bunker-busting should be obvious. Trouble is, it's too dangerous to handle (flame thrower designs that tend to melt and dissolve the operator are unpopular for good reason).


#152 para 3 - Equally, in 1941/42, the reason Singapore fell so easily to the Japanese was that they blitzed down the peninsula, and cut the water pipes from the reservoirs to the city, rather than trying an opposed amphibious landing as the British defenders had long presumed.

#174 - Very true, and equally this was the big issue with the Me-163 Komet; the Hydrogen Peroxide oxidiser tended to dissolve the pilot if it leaked into the cockpit.


Indeed - C-Stoff and T-Stoff (the latter being your oxidiser) were hypergolic, and getting either where the other had been was disastrous. Also, your aircrew having to wear rubberised suits when handling your aircraft was not terribly desirable under combat conditions.


Hmmm spam with an apparent link which doesn't lead anywhere. (No. 177)

Oh the fun you could have with some ClF3, in ways which would get you arrested as a terrorist.


the middle of an extreme heat wave in summer (with daytime temperatures spiking to over 44 celsius -- at which point, in the absence of aircon, mammals tend to keel over and die).

Not actually true - it regularly gets hotter than this in a lot of places where humans have been quite happily living for thousands of years. For example, Iraq. The average maximum temperature in a Baghdad summer is 44C.


Spam nuked. Thanks.


Ajay: I elided the humidity requirement. As long as it's under 100% relative humidity, we can cool ourselves by sweating. 100% rh and temperatures averaging over our core body temperature for too long, however, are a problem, and if we go over 42 celsius for a significant length of time we tend to die -- a whole bunch of the enzymes our metabolic processes run on are denatured by heating above this temperature.

It takes a while to bring a compact object mostly consisting of water into thermal equilibrium with its surroundings, especially if it's got a built-in evaporative cooling mechanism, but if those surroundings are too hot and evaporative cooling is rendered ineffective, you end up with heat stroke -- a life-threatening condition.
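The point about humidity defeating evaporative cooling is usually quantified as wet-bulb temperature: the lowest temperature a sweating body can cool to. A rough feel for it can be had from Stull's empirical approximation (valid roughly for 5-99% relative humidity and -20 to 50 C); the 44 C / humidity numbers below are illustrative:

```python
import math

def wet_bulb_stull(t_c, rh_pct):
    """Stull's (2011) empirical wet-bulb approximation from dry-bulb
    temperature (deg C) and relative humidity (%).
    Valid roughly for RH 5-99% and T between -20 and 50 C."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# Same 44 C air temperature, very different outcomes:
print(round(wet_bulb_stull(44, 10), 1))  # dry desert air: ~21 C wet-bulb, sweating works
print(round(wet_bulb_stull(44, 80), 1))  # humid air: ~41 C wet-bulb, evaporative cooling has failed
```

That's why Baghdad at 44 C and low humidity is livable while the same thermometer reading at high humidity is a mass-casualty event: sustained wet-bulb readings near core body temperature are what kill.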


Yes, the humidity is the killer. I have sat more than once in a sauna with the air temperature reaching 100C, and indeed 60C is cold for a sauna.

Dump that ladle full of cold water on the coals, and within seconds it's intolerable, even though the actual thermometer temperature may have dropped slightly.


I'm not knowledgeable enough to comment much on Rome, but Tang dynasty China had over a million in the capital, and supported something in the region of ~50-80 million across the country between 600 & 900 AD.
And there were definitely no railroads in sight, although to be fair most major cities were on rivers.

IIRC Rome is on the Tiber, which should have been navigable enough, so I'd expect the ~1M figure to be at least plausible.


As an aside to the discussion, if you are interested in weird and wonderful chemistry, I can thoroughly recommend Derek Lowe's blog, especially the Things I Won't Work With section, which covers a whole range of eye watering disasters in the making...


@ 182
The port for Rome was Ostia.
Everything of any size or bulk had to be carted up from there .....


IIRC, the US Air Force did some experiments in the 1950s and '60s to determine how high a temperature pilots could stand for how long. They put their subjects into ovens and subjected them to dry heat; I can't find any information from back then just now, but I think the temperatures were up around 250°F for at least 5 minutes.


The problem at those temperatures is usually not so much how hot the dry air is as that touching any surface that has reached the same temperature will be distinctly painful.

That's why saunas tend to be surfaced with wood, which has a relatively low heat capacity and a pretty low heat conductivity. With care, you can sit on a wooden bench with only a towel between you and it.

However, at 120C I can imagine that you're getting to the point where the inside of the respiratory tract starts getting overly heated.


A quick Google suggests they used barges. They were certainly used in Brittania--The Foss Dyke between the Witham and Trent was originally a Roman canal.

According to Procopius the barges on the Tiber were hauled by oxen.


USS Lexington, Tacoma, Washington, 1929. It helped that the USN was fond of turbo-electric drives for its capital ships at the time, rather than gearing the turbines directly to the propellers like most other navies.


The headline in the Spectrum article is misleading. From the article:

[...] returned to close to 100% normal operation at 15:00 on December 10. This marks the recovery from the stoppage of part of the facility's fabrication equipment caused by a momentary drop in voltage at 5:21AM on December 8.

So, it's more like a two-and-a-half-day degradation in production capacity at this plant is going to cause a two-month-long problem with flash supplies. I suppose it doesn't sound quite so striking in that case, though.



About this Entry

This page contains a single entry by Charlie Stross published on December 14, 2010 1:35 PM.
