
The Snowden leaks; a meta-narrative

I don't need to tell you about the global surveillance disclosures of 2013 to the present—it's no exaggeration to call them the biggest secret intelligence leak in history, a monumental gaffe (from the perspective of the espionage-industrial complex) and a security officer's worst nightmare.

But it occurs to me that it's worth pointing out that the NSA set themselves up for it by preventing the early internet specifications from including transport layer encryption.

At every step in the development of the public internet the NSA systematically lobbied for weaker security, to enhance their own information-gathering capabilities. The trouble is, the success of the internet protocols created a networking monoculture that the NSA themselves came to rely on for their internal infrastructure. The same security holes that the NSA relied on to gain access to your (or Osama bin Laden's) email allowed gangsters to steal passwords and login credentials and credit card numbers. And ultimately these same baked-in security holes allowed Edward Snowden—who, let us remember, is merely one guy: a talented system administrator and programmer, but no Clark Kent—to rampage through their internal information systems.

The moral of the story is clear: be very cautious about poisoning the banquet you serve your guests, lest you end up accidentally ingesting it yourself. And there's an unpalatable (to spooks) corollary: we the public aren't going to get a crime-free secure internet unless we re-engineer it to be NSA-proof. And because of the current idiotic fad for outsourcing key competences from the public to the private sector, the security-industrial contractors who benefit from the 80% of the NSA's budget that is outsourced are good for $60-80Bn a year. That means we can expect a firehose of lobbying slush funds to be directed against attempts to make the internet NSA-proof.

Worse. Even though the pursuit of this obsession with surveillance in the name of security is rendering our critical infrastructure insecure by design, making massive denial of service attacks and infrastructure attacks possible, any such attacks will be interpreted as a rationale to double-down on the very surveillance-friendly policies that make them possible. It's a self-reinforcing failure mode, and the more it fails the worse it will get. Sort of like the war on drugs, if the war on drugs had the capability to overflow and reprogram your next car's autopilot and drive you into a bridge support, or to fry your insulin pump, or empty your bank account, or cause grid blackouts and air traffic control outages. Because that's what the internet of things means: the secret police have installed locks in everything and the criminals are now selling each other skeleton keys.

The only way out of this I can see is to abolish the secret police and build out a new secure internet before the inevitable processes of institutional change generate a new rationale for spying on us. Unfortunately I see no way (at present) to pursue this agenda.

316 Comments

1:

On the point of hacking insulin pumps, have you seen this? http://www.wired.com/2014/04/hospital-equipment-vulnerable/

Hacking your insulin pump is amongst the least dangerous things possible...

2:

Yes, I've seen that. (Bear in mind that dumping the entire contents of an insulin pump in one infusion is capable of causing an acute hypoglycemia incident that may be fatal if nobody is around to administer glucagon or i/v glucose.)

NOTE: Before drawing a security-related item to my attention please be aware that I've been following the Risks digest since 1989 ...

3:

Dan Carlin has drawn a parallel between The War on Drugs (and the erosion of freedoms to "fight" it) and these concerns.

But as you say, even worse.

How do you or can you dismantle a secret police's apparatus from within its society, barring a catastrophe?

4:

Why do you think transport layer encryption would have helped the NSA against Snowden? As I see it, it was an insufficient security regime at the NSA and its outside contractors that made it possible.

5:

It's not just the transport layer (although if we had a secure internet the NSA would have had orders of magnitude less damaging information for Snowden to grab) but the entire culture of insecurity that they promoted.

6:

From what I've heard, the NSA dislikes transport layer security even internally on secure networks (or maybe especially on secure networks). This is because it defeats their attempts at Mandatory Access Control - if everyone is using SSH tunnels inside the organisation, you'll never spot Snowden or Manning sneaking stuff out.

7:

Well, look how well that played out.

8:

I would argue that the NSA's behaviour, acting against security, built an internal anti-security culture. Remember, a part of their role is to assure the security of US communications.

They could have created transport-layer security which had backdoors they could exploit, but which made all the bad things harder. But no, they didn't provide any security against anything.

The biggest embarrassment from Snowden should not be that they spy on people. That is part of their publicly declared job. It is that they so dismally failed at their declared task of protecting the USA (and, incidentally, the rest of the world).

And Snowden, all on his own, makes the Cambridge Five look like pikers.

9:

Snowden is "Big Data" applied to government leaks.

10:

It occurs to me that the NSA have also made rods for their own back in several other ways.

Firstly, although they demand the absolute loyalty of their contracted employees, they do not really go out of their way to reward such loyalty, indeed their policy of using external contractors on fairly short contracts with no other carrot to keep these people on-side would seem foolish in the extreme. Back when all their employees were in-house, a job with the NSA was a well-rewarded job for life; the successful employee had a well-paid and secure job for as long as they wanted it.

Nowadays a contractor has little incentive save fear of punishment to keep him on-side, and the thought occurs to me that heavy indoctrination on the lines of "Always do what is right", together with indoctrination which ties into religious belief, is also highly prone to backfiring. The espionage industry is notorious for grey areas; indeed it is nothing but morally ambiguous grey areas which boil down to whose gang you owe allegiance to. Taking young, religious people who aren't worldly wise, giving them this sort of "Always do the right thing for God and country" treatment, then exposing them to all manner of morally dubious information is asking for trouble. All the recruit has to do is look at this, conclude that the country ain't on the side of the angels and hey presto, another Bradley Manning.

Secondly, their policy of mass internet surveillance doesn't seem to be actually gaining them very much. It is an extremely expensive way of not catching terrorists, in fact. Maintaining an unpopular policy that doesn't do what it is supposed to do and which costs huge sums of money is really incredibly stupid.

Thirdly, as OGH points out, security holes can be exploited by everyone who finds them. Either the government mandates secure back-doors in products (and then has the problem of getting rid of products which lack the back doors) or it suffers the consequences.

All in all, I do believe the NSA have painted themselves into something of a corner here.

11:

On another, but equally important, note, the reported "open secret in Washington" is that the NSA's primary job is economic espionage, and that the war on terror is simply a cover for that. In other words, we're supposedly simply doing what we accuse the Chinese of doing to us. If this is the case, we can't expect the online War on Terror to wind down anytime soon, even if it's obviously totally failing to do its job. It's the cover story, not the primary operation.

We should also note Stewart Brand's 1999 comment: the internet could “easily become the Legacy System from Hell that holds civilization hostage. The system doesn’t really work, it can’t be fixed, no one understands it, no one is in charge of it, it can’t be lived without, and it gets worse every year.” I had a little fun with how the NSA systematically (if unintentionally) set up the internet as the LSFH back in January (link for the bored), and I still think that's a likely way it's going to play out.

12:

I agree that there is no incentive to be loyal - you employ contractors on short term contracts, often on a lowest bid basis, yet you expect life long secrecy and loyalty. Why?

Of course, the people issuing the contracts are a generation older, or a couple of decades older anyway, and have grown up having a job for life, within the system and being rewarded for playing nicely within it. Talk about a mismatch of expectations.

13:

The message I got from Poul-Henning's keynote at FOSDEM (and my reason for writing this comment is at least partially to get him to confirm it) was that as harmful as the NSA might be, the IT industry is quite capable of making an unholy mess of security all on its own.

This is because security is disturbingly complex at every level; cryptography is basically a game between Math PhDs poking holes in each other's work for fun, and, if they're working for the NSA, profit. But even when cryptography's sound, there are many opportunities for people to use it incorrectly. Your average web application contains a myriad of vulnerabilities that have nothing to do with bad cryptography or NSA sabotage, and everything to do with the complexity inherent in software development.

And security is intangible: it can't be measured or easily gauged, and it's not rewarded.

Until a few weeks ago, I seriously believed that OpenSSL was a proven, robust security library, developed by a well-funded large group of highly responsible programmers. It took the schadenfreude-filled analysis of the OpenBSD developers for me to realize that this was not the case.

Again, there's no need to blame the state of OpenSSL on a conspiracy; the code was always there for everyone to read. Few people did. Most of them wrung their hands. The people who actually did something about it (like the authors of GNUTLS) don't seem to have improved matters much.

How did OpenSSL get to its current state? It's easy to imagine the scenario: OpenSSL was funded by consultancy work, which seems to have mostly consisted of large corporations and government institutions paying the developers to port it to yet another ancient mainframe or Unix flavour.

Those corporations didn't care about the actual security of OpenSSL, they cared about making their ancient business software comply with some well-intentioned piece of government regulation. And they got what they paid for.

As much as I like the scenario of software engineers saving the world, if you were to redesign the internet today, you'd get something that's a bit better, but not a whole lot. Not unless you magically fix the incentives of all the developers involved to be tied to security.

14:

Maintaining an unpopular policy that doesn't do what it is supposed to do and which costs huge sums of money is really incredibly stupid.

EXACTLY like the "War on Drugs" in fact ... except more & more voices are being raised on that subject as a futile waste of time & money; sooner or later, little by little, it is going to be dismantled, as steps appear to already be under way in some places. How long before this trope spreads to the "security" fiasco is another question.

Possibly, of all people, the internal bean-counters & auditors may be the ones to have an effect here? Because, in the end, there IS a limit to the money you can spend on "security" whilst trying to keep your state/system running - that's what fucked the "Soviet" system in the end, wasn't it?

15:

EXACTLY like the "War on Drugs" in fact ...

The war on drugs does exactly what it's supposed to do; except that's not what the description on the tin says.

The tin says, "we combat the eeeevils of corrupting drug use!" (Which makes little or no sense when you look at the far worse carnage caused by alcohol, tobacco, and automobiles, not to mention running with scissors).

What it's actually about ... well, it provides a deniable pretext for a war on the USA's underclass, with a barely concealed subtext of racism (just look at differential sentencing for drugs convictions between white and non-white demographics). It also provides a money trough for police forces and the prison industries, for all the security equipment suppliers, for dope testing and rehab and the whole panoply of drug interdiction. It also keeps prices artificially high due to manufactured scarcity, which offers a vital lifeline to organised crime -- a sector populated by entrepreneurs too incompetent to cut it in the regulated and paperwork-obsessed legal commercial sectors.

So no, the war on drugs isn't pointless. It's just that nobody in their right mind would want to make that particular point.

16:

No. The Soviet system was screwed by two things. The first was the failure of central planning. The second was its vast and unproductive military budget.

It is far from certain that the most productive system is not Fascism. China has shown how state directed Capitalism plus a dynamic party dictatorship can be very efficient.

Liberal democracy may just be a brief historical anomaly, not even really much outliving the Soviets.

17:

While the US security complex has lobbied against encryption tools in the public domain, I am not sure that the absence of such lobbying would have resulted in a more secure internet. The more I read on the subject, the more it seems that encryption is hard and fragile. In fact, as far as I can see, most software is pretty fragile and will remain so, because outside NASA no one will pay for and spend the time on really robust code.

As for Snowden's leaks, it appears to me that too much crap is classified Secret and Top Secret, and so anyone with those clearances has access to too much crap.

Also Snowden shows that you cannot have loyal outsourced minions.

If you want loyal minions, pay them well, treat them well and have a good pension plan.

This is the ultimate weakness of the current oligarch types. They think loyalty can be bought but will not pay the price to really make it work.

18:
It is far from certain that the most productive system is *not* Fascism. China has shown how state directed Capitalism plus a dynamic party dictatorship can be very efficient.

It's not that efficient. Parts of it are very efficient, but it helps that China is still at the point where it can get phenomenal growth rates just from adding technology and people in more productive arrangements. The state-owned parts of it - the SOEs, the banks - are very inefficient and under-perform economically, even after they already cut loose a couple thousand SOEs to sink or swim in the early 2000s.

19:

Security is hard and fragile, but so are lots of other software problems. There's no particular reason why we can't have good plug and play security for the (fairly small) range of things most people want it for.

In my opinion, the largest remaining flaw in our crypto-primitives sandbox is a lack of a good, publicly verifiable, identity service.

At the moment, we either just live with an insecure first exchange of keys or use certificate authorities.

This is exacerbated by the lack of a good system to move key material around between a typical user's many machines.

20:

Don't get too tangled up in technical mechanisms. (This is one of the NSA's mistakes.)

Everybody is acting to minimize their insecurity; this includes oligarchs, people who want to be mistaken for oligarchs, and people who want to curry favour with oligarchs.

The insecurity the oligarchs want to minimize is the recognition that the system has failed, on the one hand, and the consequent recognition that, as oligarchs, they're incompetent, on the other. Everything else follows from that: the re-definition of success as "largest pile of money" instead of "greatest capability" (go look at what companies and states bragged about in 1970, and today; it's gone from "we can do amazing things" to "we have a huge pile of cash/this will enlarge our pile of cash"). (And tangentially, note that the Chinese oligarchs have a terrifying set of interlinked problems which they're not likely to solve, concentrating their minds. Don't mistake frantic for efficient.)

Anyway; security as a process is about measuring success, being able to tell with some confidence what the probability is that your information has gone to someone you didn't designate. What we're getting is security as a mechanism of control, using a labelled category of fears to require conformity to a centrally set proper behaviour. This is no different from any other social mechanism to construct a prescriptive normal; it's just moved into the highest-status available social area, which co-incidentally is also the area where most people feel baffled and uneasy because they don't know what's going on. As long as we stick to having any prescriptive normative conduct as the core mechanism, we're going to get exactly this sort of "you should feel privileged to be beaten by the powerful!" behaviour going on. (We also get the transfer of insecurity; wages are too low and contractors are expected to be arbitrarily loyal while rates are raced to the bottom, because the folks with power are trying to reduce their insecurity to a disproportionate degree compared to the rest of the population.)

As a system, it's going to collapse and it's going to collapse hard, because you can't supplant a system without the ability to beat the current system in a fight, and the work around -- actual democracy -- has been sufficiently nobbled that you can't get the option of electing someone who isn't a willing participant in the current system. We certainly can't expect anything to win a fight; there isn't a rising pattern of organization able to equip troops, which is what that takes. And the current system has stopped being about what it can do and become a machine for protecting a tiny number of people from facts and consequences.

21:

In fact as far as I can see most software is pretty fragile and will remain so because outside NASA no one will pay for and spend the time for really robust code.

Also, we're building on a legacy of errors.

  • Von Neumann architecture (single address space for code and data) rather than Harvard architecture (separate code and data spaces) won out in the late 40s/early 50s. Which permits execution of data, thereby making stack-smashing and buffer overrun attacks possible. (Mitigated somewhat in more modern microprocessor architectures, especially now RAM is ridiculously cheap, but was at the root of a lot of mischief in the 80s and 90s.)

  • Null-terminated strings in C and similar languages rather than using a couple of extra bytes to record the length of the string properly. Gave rise to a bunch of overrun attacks (a minimal sketch follows after this list). Notorious weakness in all UNIX (and Windows) descended OSs through the 80s and 90s. Mostly fixed these days ...

  • MULTICS never went anywhere much; instead we got Ken Thompson's hobbyist hack, UNIX, a cut-down clone of MULTICS minus the big iron features and security architecture because he couldn't cram it into a 16-bit minicomputer with 32Kb of RAM in 1972. UNIX and Linux have been trying to add back that level of security ever since, to the extent that we've now got stuff like (oh the irony) SELinux or MacOS X's ACLs, and IBM VM style virtualization, about 30-40 years too late.

  • Gary Kildall took the wrong day to go fly his plane, so IBM went to Microsoft and bought DOS, which Bill Gates in turn bought from Seattle Computer Products. The right choice for the PC would have been CP/M-86, with the option of Concurrent CP/M-86 once the 286 came along: true pre-emptive multitasking and, IIRC, proper isolation of user accounts. Digital Research were kinda-sorta cloning DEC's better 16-bit minicomputer environments in the 1970s and early 80s (let us not forget that when DEC went 32-bit they begat VMS, which still has fans to this day) and were doing multi-tasking and multi-user properly, or at least better than Microsoft managed until, oh, 2003 or thereabouts.

  • ASCII. Why, oh why? 7 bits? They might as fucking well have stuck to Baudot code. A 50 year mistake we're finally burying at the crossroads with a stake through its heart, a mouth full of garlic, and a gravestone inscribed in UTF-16. As it is, ASCII looks like a cunning conspiracy to lock the non-Anglophone world out of the Anglophone world's computer industry.
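
To make the null-terminated-string item above concrete, here's a minimal C sketch of the overrun mechanism (variable names are hypothetical, the behaviour is formally undefined in ISO C, and modern stack canaries and ASLR will usually catch it; treat it purely as a demonstration of the underlying flaw):

    #include <stdio.h>
    #include <string.h>

    /* strcpy() trusts the NUL terminator and nothing else: it copies
       until it hits a zero byte, however small the destination is.
       With an unlucky (or attacker-chosen) stack layout the surplus
       bytes land on adjacent variables or a saved return address. */
    int main(void)
    {
        char flag = 0;                     /* may sit right after buf */
        char buf[8];
        const char *attacker_input = "AAAAAAAAAAAAAAAA";  /* 16 'A's + NUL */

        strcpy(buf, attacker_input);       /* no bounds check: overruns buf */
        printf("flag = %d\n", flag);       /* often 65 ('A') instead of 0 */
        return 0;
    }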

    22:

    Alas, I think your conclusion is correct.

    23:

    Further evidence for said war can be divined from the moral panics which resulted in each drug's ban in the US in the first place: up until the psychedelics of the 60s, each push had a racial aspect (Mexican farm labourers for cannabis, African-Americans for cocaine, Chinese for opium).

    24:
    Null-terminated strings in C and similar languages rather than using a couple of extra bytes to record the length of the string properly

    The real problem is no bounds-checking in C. Pascal-style strings are unpleasant for a lot of reasons, but if you have bounds-checking then you simply use that mechanism -- and it has to work for subsets of arrays, so you get the features you get from, say, passing in the address of the fifth byte of a string.

    While I am willing to argue in favour of C for the kernel, and some low-level libraries, there is simply no justification for using it to write applications. Pointers and buffer overflows are some of the hardest issues with C, and have caused the most problems; by the time you end up writing "safe code" in C, you've spent more effort than you would have in writing in a sane language.

    With C, you also have no consistent error handling. Not exception handling -- although that's an issue as well -- but error handling. What to do in a function when you've got an error. This has led to two different security problems lately: the GNU TLS bug, and the Apple "goto fail" bug. In both cases, the code was trying to check for an error condition, and terminate the function early as a result. (In the GNU TLS bug, the problem was that some code used 0 as error and non-zero as success, while other code used -1 as error, and 0 as success, and then translating between the two caused an issue.) (You could go the route of treating all errors as exceptions, of course. But then your language needs to have a usable, and well-documented, exception mechanism.)
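
    For the curious, here's a condensed, self-contained C sketch of that failure mode (simplified stand-in functions, not the actual Apple or GnuTLS code): the duplicated goto is always taken, the real signature check is never reached, and err still holds the success value from the previous check.

        #include <stdio.h>

        /* Stand-in checks: 0 = success, nonzero = error. The last one
           represents the signature verification that ought to fail. */
        static int check_a(void)         { return 0; }
        static int check_b(void)         { return 0; }
        static int check_signature(void) { return 1; }

        static int verify(void)
        {
            int err;
            if ((err = check_a()) != 0)
                goto fail;
            if ((err = check_b()) != 0)
                goto fail;
                goto fail;   /* duplicated line: always taken, whatever the indentation suggests */
            if ((err = check_signature()) != 0)   /* never reached */
                goto fail;
        fail:
            /* err still holds 0 from check_b(), so a forged signature
               "verifies" successfully. */
            return err;
        }

        int main(void)
        {
            printf("verify() = %d (0 means all checks passed)\n", verify());
            return 0;
        }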

    There is no evidence that the NSA has encouraged the use of C for all levels of programming, so we really can't blame it all on them.

    (I just recently gave a short rant about this at a conference.)

    25:

    Disbanding the NSA all at once would be an even bigger mistake than creating it in the first place. When a few thousand people with a criminally useful skillset who know each other hit the job market at the same time.... well, look what happened when Iraq's army was disbanded. They'd be the hacker version of Los Zetas.

    Much better to cut it 5-10% a year over decades.

    26:

    Hardware costs money, and back in the 70s and 80s bits cost bucks, so code sharing data space and self-modifying code was the way to go; security at the time involved switching the machine off at night and locking the door on your way out. No networking worth a damn: tapes transferred by the post or in station wagons were the nearest thing to what we've got today.

    7-bit ASCII was for the same cost reasons: eleven bits for a serial character (one start bit, 7 bits of data, a parity bit and two stop bits) over a 110 baud line equals 10 characters per second, and the line was so noisy the parity bit was essential - a crude per-byte check. 16-bit Unicode is starting to look a bit constrained nowadays, now they've added Klingon and Gallifreyan character pages.

    As for the "coulda been a contender" CP/M-86, yeah right. The killer for DOS was that it was rolled into the purchase price of the IBM PC whereas CP/M and its 8086 kissing cousin cost serious bucks back then. It also used a lot of memory and bits cost bucks like I said. DOS was lean and mean and cheap and it was going to win out anyway, I think. The IBM PC won the computing wars because it was not a multitasking multiuser mini costing five figures and with a four figure a year support contract, it was a single-user desktop that was fit and forget for most companies, step and repeat.

    27:

    Philby, Burgess and Maclean were all full-time well-paid employees of a certain nation's security apparatus and that didn't help.

    The real problem is that the NSA knows too much in part because there is so much to know and to cope with the mountains of data out there they need lots of Sherpas, lots and lots of Sherpas and the odds increase that one or more of them will eventually disappear off the mountain with a rucksack of "stuff" no matter how you reward them financially or otherwise. Snowden was trousering over six figures when he bolted if you recall.

    28:

    The killer for DOS was that it was rolled into the purchase price of the IBM PC whereas CP/M and its 8086 kissing cousin cost serious bucks back then. It also used a lot of memory and bits cost bucks like I said. DOS was lean and mean and cheap and it was going to win out anyway, I think.

    Please tell me you're joking?

  • MS sold DOS to non-PC customers in the early days for about 20% less than CP/M-86.

  • DOS started out as QDOS -- an explicit CP/M-86 clone (that, it later transpired once the source code to both OSs was released, ripped off bits of Kildall's BDOS).

  • CP/M-86 and early MS-DOS were about the same size. (Clone, remember?) Concurrent CP/M required a bunch more RAM, obviously, because it was bank-switching and running concurrent multiple user sessions.

  • Lest we forget, back in those days Microsoft was pushing its own multitasking multiuser operating system of the future -- Xenix, AKA Unix Version 7. D'oh!

  • Basically, we ended up with MS-DOS because Gary Kildall, IBM's first choice for their PC OS, wasn't in the office the day the skunkworks team from IBM came calling. So they went to visit Bill Gates, and Bill said "sure we can do that," then went out and hastily purchased a product to re-sell to them.

    29:

    Did you ever work with CP/M-86, perchance? We had it on an Intel development system; pile of fetid dingo's kidneys... DOS had a smaller installed memory footprint; it would even work without disk drives off a cassette tape port, which the IBM PC had as standard (the floppy disks were an expensive option). DOS did less than the full-fig CP/M-86 but it would run on all IBM PCs, even the braindead ones with only 64kB of RAM. Concurrent CP/M was a solution looking for a problem: the IBM PC was never meant to provide multiuser capabilities or even multitask, as the hardware had no memory management or protection system. Sure there were bodges added later by third parties to do something like multitasking on the original 5150, but they worked about as well as multitasking on the Mac of the same vintage, i.e. not very well. Round robin is not your friend.

    30:

    There is one organisation / cultural group that might have both the resources and motivation to create a more secure Internet: the militaries of the European Union.

    NATO and other western-associated militaries have been moving to IP in place of proprietary protocols. "Ally" does not automatically mean "subservient" and I would bet a lot of military people outside the USA are not at all happy with the Snowden revelations.

    Even a lot of the US military themselves don't like the spooks. There were unprecedented public complaints by military commanders about the slowness of the satellite photo people in the Gulf War(s). And remember the drones in Afghanistan with unencrypted video feeds? Marcus Ranum plausibly suggested that this was because encryption would have meant NSA involvement, which would have delayed everything by years, so they deliberately took the risk.

    France, we're depending on you for IP7 :-)

    31:

    I thought your article was sensible until you digressed into the notion of reengineering the Internet to be NSA-proof.

    The computing power, breadth/depth of intellectual capital, level of technical prowess, and power afforded the NSA enables them to defeat any scheme devised to provide a more secure network.

    Exacerbating matters is the incestuous relationship between the standards bodies, the academy (including the best thinkers), the legislative/executive branches, and the intelligence community.

    The only way you can dispense with the NSA is to deplete its budget or disassemble it. And neither are in the cards.

    33:

    The thing about the War On Drugs is when it was launched, anyone who was serious about either drug hazard reduction or policing to lower crime levels knew damn well it wouldn't work. There was already another example within the living memory of at least one of the principal proponents[1] of the scheme: namely, Prohibition. We know how well THAT worked to reduce alcohol-related problems in the USA... and replace them with a whole new set of alcohol-related problems (such as the lethal effects of some forms of home-made liquor), and a greater number of alcohol-related crimes.

    What Prohibition DID do was empower the domestic spooks - the FBI got its start and its justification for existence out of chasing bootleggers and mobsters, even if nine times out of ten the solutions they came to weren't great. It also empowered domestic law enforcement to a far more muscular version of "protection" than was common in countries which didn't have this sort of system happening. I suspect it was also somewhat behind (or at least provided a lot of positive reinforcement for) the "anything we do is right because we're the good guys" mindset which is so near and dear to the hearts of certain USAliens.

    When the notion of a "War on Drugs" came along, I'm sure there were at least some voices raised pointing out the majority of the "risks" from drug addiction are ones which could be managed much more effectively by a campaign of legalisation and regulation. But they would have been drowned out by the roar from the law enforcement lobby, which could see a clear path to endless expansion of their power and their forces. The domestic and international espionage lobbies would have leaped at the chance for more power and greater scope as well. And here's the thing: they almost certainly knew they were creating a massive opportunity for larger and better organised criminal groups - indeed, their activities facilitate these, by sweeping up the small and the foolish.

    I suspect there's a certain kind of law-enforcement/espionage mindset which generates business for itself by ensuring the criminals exist. If they had their way, buying a lock for your door would be illegal, because your locked front door prevents them from coming into your house at will, and checking through your belongings to find out whether you might be involved in some criminal or subversive activity.

    It's never been about preventing crime or catching criminals. It's always been about controlling the wider population. The same thing goes for the NSA and their lack of interest in internet security - it's not about the crime, it's about the control, at which point knowledge is power. It's panopticon writ large - "you never know who might be looking" - and it's intended to scare people into compliance with authority.

    [1] Ronald Reagan was certainly old enough to remember Prohibition, although he may not have been of the social class to actually be affected by it in any manner other than economically.

    34:

    The NSA isn't magic, but can you be sure they haven't solved fast factorization? They've sure spent a hell of a lot of cash trying over the last 25 years or so, and if they have, no modern system of encryption is secure.

    35:

    If they had solved fast factorization, then why bother with any of the RNG standards attacks? Why plant something you hope will be picked up and used when you can just scoop up everything regardless?

    36:

    Re: the Internet as Legacy System From Hell: as a programmer, this recent essay is the funniest programming-related thing I've read in years, and makes a similar point.

    37:

    Funny. Scary. True in so many ways.

    And the worst part of it is that this was true in the 70s when I got started with commercial programming. And is still true today.

    The biggest problem with security is that it just doesn't sell unless broken. Then it sells only to the extent of fixing the immediate crisis. Trying to bake security in up front is a fight that doesn't normally lead to promotions. And can lead to being sidelined.

    38:

    @29:

    Round robin is not your friend.

    I stayed with DOS as my primary environment until I made the move to Linux.

    On the other hand, I was one of those whackjobs running DESQview/386, which was a pre-emptive multitasking add-on for DOS, and I could (and did) fire up Microsoft Windows in a DESQ task while running multiple copies of PCBoard BBS as background tasks...

    Linux was not only a very similar environment to what I was already running, I'm still running the same DOS text editor that I settled on in 1986... with dosemu and wine and virtualbox, "all your software are belong to us!"

    39:

    If "the open secret in Washington is that the NSA's primary job is economic espionage", then you have to say that that job's not going much better than the cover story. Are America's companies really leaping to the cutting edge of technology development on the basis of the secrets they nick from other nations? Not that I've noticed. It's possible, I suppose, that they would be doing even worse without it, rather in the way we thought Monty was a pretty pisspoor general until we learned about that he knew the other side's plans from Ultra and would have been really shit if he hadn't, but the performance of the American economy is hardly an advertisement for that approach.

    What would happen if NSA was shut down? Well, Agents of Shield seems to be having a stab at that scenario.

    40:

    They [NSA] could have created transport-layer security which had backdoors they could exploit, but which made all the bad things harder. But no, they didn't provide any security against anything.

    The problem with backdoors in general is that they are usable by parties other than the one who designed them. There is no backdoor which the NSA could implement in such a way that they could be certain that they were the only ones using it.

    It's of course possible to make the backdoors hard to use. I think influencing the algorithm parameters for encryption libraries is probably the best way of doing this, because that can't be discovered by just reading the code. That didn't work out in the end, though. (I'm also not sure that nobody managed to figure it out even before the stuff went public.)

    Of course our computers could be (and are starting to be) more restricted than the old MS-DOS-Windows-8086 derivatives. The main problem here is that people still need the hardware in their hands (mostly), and preventing people from modifying that is not easy. The more compact computers (mobile ones for example) and putting more software on the Internet is one way of getting more control on how the computer works.

    One thing against complete computational control by one party, for example the NSA, is that they don't really control everything. Other, even allied, countries probably wouldn't like it (and don't!) if they had only computers which are in the end controlled by the NSA. One solution for this would be domestic computer production, but it has its problems too, starting with where you are going to get your microchips and who controls those. Bootstrapping a chip factory with a verifiably clean security trail, so you can know that what you want on the chip is there and nobody else knows what it is, is not going to be easy, fast or cheap.

    41:

    While I am willing to argue in favour of C for the kernel, and some low-level libraries, there is simply no justification for using it to write applications. Pointers and buffer overflows are some of the hardest issues with C, and have caused the most problems; by the time you end up writing "safe code" in C, you've spent more effort than you would have in writing in a sane language.

    I agree completely. I have worked with C for years, from kernel development to applications, and I don't think it's suitable for anything over very low level libraries.

    Good coding practices, like strict code reviews, remedy the situation somewhat, but if you use a sane language you gain even more from the same practices.

    My last programming job was a big project using C++. I kind of like the language, but its kitchen sink properties make it a hard language to use. We used Qt (and a toolkit derived from Qt), and that helped to control the features and memory management of C++ quite a bit. Still, I'm more inclined to use higher level languages if I need to program something. I had the opportunity to go to a project with C some time ago, but I had to turn it down, because I really don't want to do anything in C anymore.

    I started programming really with Turbo Pascal, and when I first started using C some 5-6 years later, I really ran into the 'zero-terminated strings with no bounds checking' problem. My practice was to trust the length byte of the string, and when there really was no checking, things went wrong very fast.

    On the other hand, in one summer job doing Ada, the language restrictions and bounds checking were pretty difficult in the other direction.

    42:
    The problem with backdoors in general is that they are usable by parties other than the one who designed them. There is no backdoor which the NSA could implement in such a way that they could be certain that they were the only ones using it.

    The nearest solution to this is rolling upgrades and patch fixes. The NSA/FBI creates a toolkit "May2014" using exploits currently available, works with MS to have fixes for these incorporated in the next security release, then expires the toolkit at the end of the month. Of course, co-operation on security weaknesses like this can work both ways... I say FBI because outside the NSA, there is "need" for law enforcement to break IT systems, too. And exploit kits issued to LE officers will leak and get abused, so they need to expire.

    But this only works for "Friendly" companies, who are willing to co-operate, and will fix backdoors anyway. The NSA will need to backdoor home routers and firewalls too, and they typically don't bother with rolling upgrades. Hence they are left full of holes.

    Of course, such a "expiry" policy creates a strong incentive for companies to co-operate with the NSA.

    43:
    f "the open secret in Washington is that the NSA's primary job is economic espionage", then you have to say that that job's not going much better than the cover story. Are America's companies really leaping to the cutting edge of technology development on the basis of the secrets they nick from other nations? Not that I've noticed.

    How about military technologies like aircraft? chip design and fabrication? Compared, for example, to US-made cars?

    There is no particular reason why the US would lead in chip fabrication, or military technologies, but they maintain a consistent lead. Similarly, whatever "industrial policy" is involved in maintaining that lead seems to be military-focussed, as their car industry (which should be similarly high-tech) is so bad.

    What would it look like if US industrial policy was led by economic espionage advantages? The US would prioritise dominating ICT and networking, which would enjoy an advantage in information flowing from espionage. Other industries, less so. The evidence seems consistent with the idea of espionage occurring.

    44:

    Agree with your cynicism, Charlie - & you may be correct as regards the USSA ... but elsewhere? Not so much, not even here, never mind other civilised countries....

    Replying also to Dirk (in the very next post): It is far from certain that the most productive system is not Fascism. China has shown how state directed Capitalism plus a dynamic party dictatorship can be very efficient. Liberal democracy may just be a brief historical anomaly, not even really much outliving the Soviets.

    I really hope you are wrong - or is this some of your "preaching" for your "new order" (I've forgotten what you call it ...) /snark

    45:

    OK I admit higgorance. What IS replacing ASCII? And where can one find code-lists for character representation? Unless one just Googles the name of the replacement? (Silly me)

    46:

    Snowden was trousering over six figures when he bolted if you recall.

    Yes & he claims to have bolted because, naive fool, he actually believed that the US government wasn't the USSA's Geheimnistaat. He has been vilified as a traitor, but he claims to be a patriot - exposing the corrupt nature of the latter's workings. I'm inclined to believe the second option.

    47:

    megpie Of course the USSA STILL has a ridiculous alcohol problem. IIRC, it is illegal to give your 18-year-old son or daughter a glass of wine in your own home in the states.... Which means that something like 99% of the population deliberately break the law ... The level of stupid is unbelievable ....

    Also: I suspect there's a certain kind of law-enforcement/espionage mindset which generates business for itself by ensuring the criminals exist.

    Nah. That's a Religious mindset - I suggest, if you have the stomach for it, to look up the career of Jean Calvin (OR Dominic or Loyola). They don't have to be real criminals, you see, merely ones you don't "approve" of, because they are not "pure" - or some such nonsense. IIRC, didn't the muslim brotherhood start this way in 1950's USA, of all places?

    48:

    UTF-8, or more often UTF-16, is becoming the default. They let you encode all the graphemes we've yet scribbled and scratched, including Klingon and five million varieties of Elvish (or Elven or whatever your favoured word is). It's one line of code to make your website use UTF characters these days, and suddenly non-English names with accents display correctly at once, without some poor sap having to replace them all with html entities, at least if you're using a modern browser.

    And while they potentially or always slurp up more space than ASCII characters, both bandwidth and memory are stupendously cheap these days. The ability to transmit even correctly accented French loan words in English, let alone the multitude of world languages that use non-Roman alphabets, is well worth it.
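
    To see why UTF-8 coexists so painlessly with ASCII, here's a minimal encoder sketch in C (C99; surrogate and range checking omitted for brevity): pure ASCII stays a single byte with the high bit clear, and everything else grows to two, three or four bytes.

        #include <stdio.h>

        /* Encode one Unicode code point as 1-4 UTF-8 bytes; returns the
           byte count. ASCII passes through untouched, which is why old
           ASCII files are already valid UTF-8. */
        static int utf8_encode(unsigned int cp, unsigned char out[4])
        {
            if (cp < 0x80) {                    /* 0xxxxxxx */
                out[0] = (unsigned char)cp;
                return 1;
            } else if (cp < 0x800) {            /* 110xxxxx 10xxxxxx */
                out[0] = (unsigned char)(0xC0 | (cp >> 6));
                out[1] = (unsigned char)(0x80 | (cp & 0x3F));
                return 2;
            } else if (cp < 0x10000) {          /* 1110xxxx 10xxxxxx 10xxxxxx */
                out[0] = (unsigned char)(0xE0 | (cp >> 12));
                out[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
                out[2] = (unsigned char)(0x80 | (cp & 0x3F));
                return 3;
            } else {                            /* 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx */
                out[0] = (unsigned char)(0xF0 | (cp >> 18));
                out[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
                out[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
                out[3] = (unsigned char)(0x80 | (cp & 0x3F));
                return 4;
            }
        }

        int main(void)
        {
            unsigned char buf[4];
            int n = utf8_encode(0x00E9, buf);   /* U+00E9, e-acute */
            for (int i = 0; i < n; i++)
                printf("%02X ", buf[i]);
            printf("(%d bytes)\n", n);          /* prints: C3 A9 (2 bytes) */
            return 0;
        }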

    49:

    UTF-8, or more often UTF-16, is becoming the default

    I'm seeing almost exclusively UTF-8. But that's partly because it's usually more compact and is easier to deal with, and partly because there is no single UTF-16: there's UTF-16BE and UTF-16LE, and you have to worry about which you have.

    (I did write code that used UTF-16, but I consider that to have been a mistake.)

    50:

    Windows, OSX and Qt use UTF-16 internally, and once the data is in memory the endianness is already resolved. Windows also stores Unicode text as UTF-16, which doubles the size of files with ASCII characters only compared to ASCII, ANSI or UTF-8.

    51:

    So - can I import UTF-8 into my Windows XP machine without it exploding? Will it work on Win7? (see below)

    Off-topic ... it looks as though I may have to switch to Win 7 (Shudder) because the Boss' company use it for their interfaces - so she can't work @ home without a compatible system - which rules out UBUNTU Unix ....

    52:

    While I am willing to argue in favour of C for the kernel, and some low-level libraries, there is simply no justification for using it to write applications.

    What use are secure applications on top of an insecure kernel? C isn't suited for safe programming, period.(*) Weak type system, pointer arithmetic, poor modularization, reliance on preprocessor, ... all this makes C unsuitable for programs longer than a couple of hundred lines.

    (*) What would you expect from a language that implements commutative array access?
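
    For anyone who hasn't met the footnote's point before: C defines a[i] as *(a + i), and addition commutes, so i[a] is equally legal. A tiny demonstration:

        #include <stdio.h>

        /* s[1] and 1[s] are the same expression after the standard's
           rewrite to *(s + 1) and *(1 + s). Cute here, but symptomatic
           of how thin the language's abstractions over raw pointers are. */
        int main(void)
        {
            const char *s = "hello";
            printf("%c %c\n", s[1], 1[s]);   /* prints: e e */
            return 0;
        }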

    53:

    Sure, UTF-8 and ASCII are identical for the range 0-127. If you display UTF-8 text with one of the Windows codepages, you will see chars above 127 as two (or more) random chars from that codepage - just switch the character encoding to Unicode and you will be fine.

    If you use UTF-16 with BOM, Windows will open those files automatically as Unicode.

    54:

    Internally, yes. Or at least the Windows APIs are available in 8-bit and 16-bit versions. Of course, at that point, as you say, endianess is known.

    Externally, I'm seeing a preference for UTF-8. Some applications, ones which don't worry about platform portability, dump into UTF-16 WindowsEndian. But I'm seeing more in my field that write to UTF-8 by preference.

    (I am seeing a lot of xml, true. Argh!)

    Windows also stores Unicode text as UTF-16

    Excuse me if I giggle. It can do so, but I know of no explicit WinStoreText API function that is 16-bit only.

    Binary file formats are another thing. If you're talking about those, I would maintain that they're no flavour of UTF-X, they have chunks of UTF-X embedded. Sometimes both 8 and 16!

    55:

    What use are secure applications on top of an insecure kernel?

    As Charlie points out, it's the processor architecture that's the issue. You complain about C, but C is doing nothing that machine code can't do (and can't do some things machine code can do).

    For me, yesterday's bug fix was noting a buffer overflow and fixing it. No, not in C, in Java. I basically had to add 5 lines to make it closer to what the C equivalent would have been in the first place. Except C would have used realloc rather than new+copy+drop.

    56:

    I'm using XP here. It's perfectly happy with UTF-8. If for example you open Notepad and look at the "File/Save as" option, you'll see the bottom of the dialog box shows Ansi/Unicode/Unicode big-endian/UTF-8 as possibilities. (The 'Unicode' options are the UTF-16 versions we've been talking about. UTF-8 is also Unicode. Go figure.)

    XP was the first consumer version of Windows to be pretty well 32-bit from top to bottom. Except for the 64-bit XP, which was their first step on that road.

    I would consider Windows 7 the best post-XP version. Vista? Eugh - I installed it when it came out, and reverted to XP. Windows 8 and 8.1? Too aimed at touch-screen tablets, and less good in terms of desktop UI.

    XP and 7 are the two versions I use on a daily basis.

    57:

    The nearest solution to this is rolling upgrades and patch fixes. The NSA/FBI creates a toolkit "May2014" using exploits currently available, works with MS to have fixes for these incorporated in the next security release, then expires the toolkit at the end of the month. Of course, co-operation on security weaknesses like this can work both ways...

    Even if you did this in such a manner that you have a list of exploits which are rolled in an update and then removed a month later, there are no guarantees nobody else can use them during that time.

    I can see many possibilities for figuring out how to use them. Somebody at MS could leak them on purpose or by accident. Somebody else looking through the updates could realize that some of the updates contain deliberate errors. Somebody might look at network traffic that looks like it tries to exploit something new.

    So, this does little to really prevent other people from using your exploits. It might hinder some attackers, but it will not prevent others completely.

    58:

    I also agree with OGH that the Von Neumann architecture is one of the leading root causes in the insecurity of computers. There were times when it helped to do some stuff, but those times are long past, luckily.

    That is, I hope nobody writes deliberately self-modifying code anywhere. In the times of the 8-bit machines of 35 years ago, yes, it was sometimes necessary, but not today. (Please correct me if I'm wrong and it has some real world use!)

    59:

    As Charlie points out, it's the processor architecture that's the issue. You complain about C, but C is doing nothing that machine code can't do (and can't do some things machine code can do).

    The alternative isn't machine code but a safer language than C.

    For me, yesterday's bug fix was noting a buffer overflow and fixing it. No, not in C, in Java.

    I think you mix up buffer overflows (which will cause an exception in Java) and buffer overruns. The latter write outside the allocated memory and over other stack contents like return addresses. Generally you'll find only a handful of failure modes in Java (NRE, array access, alias errors, logic errors, div by 0) after successful compilation.

    60:

    There is no particular reason why the US would lead in chip fabrication, or military technologies, but they maintain a consistent lead.

    Military tech: probably the lead has something to do with the USA accounting for 50% of global military spending by its lonesome. (Of the other half, the EU accounts for 60% -- i.e. 30% of planetary military spending -- but gets a lot less bang for its collective buck due to the buck not being spent collectively, but being split among 25 mini-pentagons.)

    61:

    Let us not forget that QDOS originally stood for "Quick & Dirty Operating System" - not something that fills me with confidence.

    62:

    And that military budget is a huge subsidy for research, it's not just "military-industrial complex" effects. And, perhaps unlike NASA, there's a certain urgency in getting the research into practical use. Military users have a different attitude to risk from accountants.

    63:
    That is, I hope nobody writes deliberately self-modifying code anywhere. In the times of the 8-bit machines of 35 years ago, yes, it was sometimes necessary, but not today. (Please correct me if I'm wrong and it has some real world use!)

    Modifying code that's currently running (or close enough to be on the same page), no; but writing some new code to memory and then running it is a useful technique even for applications. I wouldn't be surprised if the major browsers all do it, for instance.

    So you have to have some way of writing to instruction memory, and VN is the easiest way to do that (remember, you also have to allow applications to create new instructions, or fast Javascript/Java/Python/etc runtimes get harder to write). If you can't, you can't even run programs that weren't loaded by something that can. Which rather defeats the usefulness of anything much larger than microcontrollers.
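
    As a sketch of what that looks like in practice, here's a minimal POSIX/x86-64 toy JIT in C (illustrative only; no real browser engine is this simple): machine code is written into a writable page, which is then flipped to executable before being called, so no page is ever writable and executable at once.

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        /* x86-64 machine code for "mov eax, 42; ret". */
        static const unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

        int main(void)
        {
            /* 1. Allocate a writable, non-executable page and copy the
                  freshly "generated" code into it. */
            void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (page == MAP_FAILED)
                return 1;
            memcpy(page, code, sizeof code);

            /* 2. Flip the page to executable, dropping write permission,
                  so it is never writable and executable simultaneously. */
            if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0)
                return 1;

            /* 3. Call it. (The void*-to-function-pointer cast is a
                  POSIX-ism, not strict ISO C.) */
            int (*fn)(void) = (int (*)(void))page;
            printf("generated code returned %d\n", fn());

            munmap(page, 4096);
            return 0;
        }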

    64:

    @16:

    "The Soviet system was screwed by two things. The first was the failure of central planning. The second was it's vast and unproductive military budget."

    There were multiple interconnected problems. I think one of the most important was that to do something innovative you needed to get permission from multiple layers of bureaucrats who would be blamed more if it failed than they would be rewarded for success.

    In the US system, when you need repeated funding cycles from VCs it can turn out that far more of the effort must be spent on presentations to make it look good, than can be spent on actually making it work.

    "It is far from certain that the most productive system is not Fascism. China has shown how state directed Capitalism plus a dynamic party dictatorship can be very efficient."

    Didn't Japan show exactly the same thing 40 years ago? They didn't have anything like China's natural resources, either.

    One thing I see they have in common is a big surplus of savings. People who don't even want to spend a lot more, but who desperately want to prepare for bad times. Then the economy desperately looks for productive places to invest, and accepts low margins for foreign sales as a preference to no profit at all, and everybody marvels that they are so productive and that they take over so many markets....

    I'm not at all sure what's going on. I see lots of explanations that explain part of it, after the fact.

    65:

    I am tempted to suggest here that this stunning lack of efficiency in the EU's way of doing things may not actually be a bug at all, but instead a feature. One of the EU's proud boasts is that it exists to prevent World War III kicking off in Europe; I would humbly suggest that this is indeed true, but not for the reasons that the EU would wish to state.

    The EU is inefficient, wasteful, corrupt and stupid. Every month of the year, for just one week, its parliament decamps from Brussels and moves to Strasbourg and back again. This alone costs between 150 and 200 million Euros per year, every year. This bureaucratic monster also generates truly mindboggling amounts of regulation, restriction and laws per annum and absorbs huge amounts of money to enforce these rules.

    I therefore submit that the EU is indeed preventing wars, by absorbing money, effort and minds that would otherwise be fomenting revolt, rebellion and military adventure into a sort of huge, corrupt game.

    It is therefore vitally important that UKIP, AFD and all the other destabilising loonies be locked up forthwith, along with tongue-in-cheek satirists like my humble self!

    66:

    If they had solved fast factorization, then why bother with any of the RNG standards attacks? Why plant something you hope will be picked up and used when you can just scoop up everything regardless?

    The built-in vulnerabilities have a long lead time; they had to be started years to decades ago. I've heard rumors of some new, unspecified codebreaking capability becoming available in the last year or two.

    It's all guesses and conjecture, but that's the nature of the beast.

    67:

    The problem with backdoors in general is that they are usable by parties other than the one who designed them. There is no backdoor which the NSA could implement in such a way that they could be certain that they were the only ones using it.

    Funny thing is, the infamous Dual_EC_DRBG random number generator, recently de-standardized by NIST, looks like it may have been exactly that. Here's the theory:

    First off, a lot of crypto relies on good sources of random numbers, for key material, nonces, and the like. If your keys are known, or predictable, or can easily be chosen from a known space of a few million elements, it doesn't matter what the algorithms are: anyone with a modern laptop can try every possible key quickly enough that you're compromised. This is why a lot of pseudorandom number generators which are good enough for many purposes (e.g., linear congruential) are completely unsuitable for cryptographic use.
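
    A quick C illustration of the "linear congruential" point (the constants are the well-known Numerical Recipes ones, used purely for demonstration): the generator's entire state is its last output, so one observed value lets an attacker predict every subsequent number.

        #include <stdio.h>

        /* Classic LCG step; its whole internal state is the value it
           just emitted. */
        static unsigned int lcg(unsigned int state)
        {
            return state * 1664525u + 1013904223u;
        }

        int main(void)
        {
            unsigned int secret_seed = 0xDEADBEEF;    /* victim's seed */
            unsigned int observed = lcg(secret_seed); /* attacker sees one output */

            /* From that single observation the attacker reproduces the
               victim's entire future "random" stream. */
            unsigned int predicted = lcg(observed);
            unsigned int victim    = lcg(lcg(secret_seed));
            printf("predicted=%u victim=%u match=%s\n",
                   predicted, victim, predicted == victim ? "yes" : "no");
            return 0;
        }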

    But a random number generator with a cleverly concealed pattern is difficult to tell from a good one, unless you know what pattern you're looking for. So, the trick (for the NSA) is to construct a PRNG whose output contains patterns only they can extract in useful form.

    How could one do this? Public key encryption. The Dual_EC_DRBG random number generator has two components. The first is a pseudorandom number generator which is "bad", in the sense that its state can be trivially reconstructed from a few samples of the output. The second is a strange series of operations which just happens to look exactly like encrypting the output of step one with a public key using an elliptic curve algorithm. Thus anyone who had the corresponding private key could invert step two, and recover the samples of the underlying process in step one --- making the output of the whole thing predictable thereafter. But to people without the private key, it's secure.
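
    Here's a toy structural model of that two-component design in C, with small RSA-style modular arithmetic standing in for the elliptic-curve step (toy parameters, emphatically not real cryptography): only the holder of the private exponent can invert the output transform, recover the weak inner state, and predict the stream from then on.

        #include <stdio.h>

        /* Toy trapdoor DRBG: component 1 is a weak generator whose state
           equals its output; component 2 "encrypts" that state with a
           public key. Toy RSA parameters: N = 61 * 53, E * D == 1 mod 3120. */
        static const unsigned long N = 3233, E = 17, D = 2753;

        static unsigned long powmod(unsigned long b, unsigned long e,
                                    unsigned long m)
        {
            unsigned long r = 1;
            for (b %= m; e; e >>= 1) {
                if (e & 1) r = (r * b) % m;
                b = (b * b) % m;
            }
            return r;
        }

        static unsigned long state;

        static unsigned long weak_next(void)    /* component 1 */
        {
            state = (state * 7 + 3) % N;
            return state;
        }

        static unsigned long drbg_output(void)  /* component 2 */
        {
            return powmod(weak_next(), E, N);
        }

        int main(void)
        {
            state = 1234;                            /* secret seed */
            unsigned long observed = drbg_output();  /* attacker sees this */

            /* The trapdoor holder decrypts the output, recovers the
               inner state, and predicts the next output exactly. */
            unsigned long recovered = powmod(observed, D, N);
            unsigned long predicted = powmod((recovered * 7 + 3) % N, E, N);

            printf("next=%lu predicted=%lu\n", drbg_output(), predicted);
            return 0;
        }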

    The NSA has never admitted that they have the private key required for this, but at this point, everyone (even NIST) is more or less assuming that they do.

    (BTW, there's another approach, if you're capable of messing with hardware designs: bugs that show up only if a very particular sequence of operations occurs. It's easy to verify that hardware is doing what you want it to; it's awfully hard, these days, to verify that it can't be made to do something else when suitably tweaked.)

    68:

    If they had solved fast factorization, then why bother with any of the RNG standards attacks? Why plant something you hope will be picked up and used when you can just scoop up everything regardless?

    Standard doctrine in intelligence is that you must deny insight into your capabilities to your enemies. Otherwise they will figure out ways to work around your capabilities. Cf. why spysat orbits are secret -- if the guys you're snooping on know when your big bird is passing overhead, they can just roll out some tarpaulins and cover up whatever your billion dollar eye is looking for.

    If the NSA had a quantum gizmo for fast factorization, they would absolutely need to keep it secret, simply to prevent their real adversaries from doubling down and switching to One Time Pads. Which would be the cryptanalytic apocalypse.

    69:

    I'd say there are a few reasons for US military "supremacy" in tech.

  • Lots of money and a single budget, as noted above. The US National Guard (state-based) isn't known for its technical innovation, even though some parts of the national guard (like the air rescue group) are truly elite by any standard.

  • The US has lots of relatively empty space in which to experiment. This is particularly useful for aircraft (Area 51). I don't think that Europe has any particular patches of empty desert in which to experiment, not that this has stopped the EU countries from coming up with some neat planes, which leads me to

  • We buy lots of tech from other countries. There's a lot of whining and huffing in Washington now about how much NASA depends on the Russian space program, now that Russia has gotten all imperialistic again. The truth of the matter is that the military industrial complex is a large, global industry. In this context, economic espionage should be pretty normal, since every country is buying and selling as well as arming themselves.

  • As for the US lead in military tech, I suspect a few of you didn't see this when it came out: http://deepbluehorizon.blogspot.com/2014/03/mystery-aircraft-photographed-over.html . As for why a black triangle popped up over Texas in broad daylight, it showed up on March 10, right when Russia was annexing Crimea. I suspect it was a gentle reminder to the world that part of the US arsenal is not on public display.

    70:

    We know the NSA does economic espionage; a former CIA director wrote an article in the Wall Street Journal admitting it (archive, paywalled). ECHELON caught Airbus bribing the Saudis and they blew the whistle.

    71:

    I also agree with OGH that the Von Neumann architecture is one of the leading root causes of the insecurity of computers.

    Security is always a trade-off. In this case, without a single memory shared between code and data, you'd have computers that were incapable of just-in-time compilation; that couldn't hand surplus data memory over to code (or vice versa) when one ran short; perhaps even incapable of loading code at run time.

    The actual shortcoming of the architectures was the inability to mark pages as writable but not executable; that was a problem with some major architectures (notably x86), but not all. And AMD and Intel have finally fixed that, with the NX bit. So then it goes back to the OS.

    An example of recent self-modifying code: the GNU C Compiler's nested functions create a trampoline that executes on the stack. The counter-approach is Apple's "blocks," which are static and provide much the same set of features.
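
    For the curious, a minimal sketch of the GCC feature in question (a GNU extension, not standard C; the function names are mine). Merely taking the nested function's address forces GCC to build a trampoline on the stack, which is why such code needs an executable stack:

    #include <stdio.h>

    static int apply(int (*fn)(int), int x) { return fn(x); }

    int main(void) {
        int offset = 42;

        /* GCC nested function: captures the enclosing frame's `offset` */
        int add_offset(int v) { return v + offset; }

        /* Passing its address makes GCC emit a stack trampoline -- a few
           instructions written into, and executed from, the stack. */
        printf("%d\n", apply(add_offset, 1));   /* prints 43 */
        return 0;
    }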

    72:

    That page on Deep Blue Horizon does not seem to exist any more

    [[ the link had a trailing full stop, which may have caused your browser issues - I've fixed that mod ]]

    73:

    Your modern debugger would probably be very different without the ability to read and write code in memory. On the other hand, the tools we have have uses and features shaped by the architecture we have — I suspect that if we'd gone the alternate route, a whole different set of solutions would have arrived.

    And a whole different set of problems too.

    (Self modifying code is deeply useful to make it harder to penetrate software 'protection' features. Which is why viruses use it so much.)

    74:

    That is, I hope nobody writes deliberately self-modifying code anywhere. In the days of the 8-bit machines 35 years ago, yes, it was sometimes necessary, but not today. (Please correct me if I'm wrong and it has some real world use!)

    Forget 8-bit machines. The IBM 360 project was all about big memory (16MB of address space at the time) and a common instruction set across all models. But marketing decided to make memory upgrades the cash cow, so systems had to ship with limited memory. This led to everyone on the software side squeezing bytes out of all code. It turns out the instruction set was such that, if you were moving fewer than 256 bytes around, it used less memory to calculate the size of the move at run time and patch it into the length field of the move instruction executed next. This led to all kinds of pain when virtual memory came along and suddenly the code base of what should have been static pages became "dirty" as a matter of course.

    Fred Brooks talks about some of this in his book, and I learned a bit more about it in a talk I heard him give recently. His talk, plus his book, plus what I experienced in the 70s and 80s all make more sense now. Interesting guy. Seems to be a Mac fan from the early days. :)

    75:

    Some folks with ties to the security industry say the white hats are really pissed off at the black hats just now.

    76:

    If I understand what you're saying, every major browser, including most tablet and smartphone browsers, certainly gives you the option to do it. Javascript and all the javascript libraries rely on it.

    Try turning javascript off and surfing around your favourite sites (including this one, although I think only for submitting comments) and see just how much functionality degrades. Hopefully, if it's well written, pretty gracefully.

    All of that is locally run code modifying the memory - the stuff that your browser actually displays. As a quick example, I've just been writing a web-page that has a button on it. When you click the button it does a variety of things - about 1/3 of the content disappears, another 1/3 appears and the text on the button changes. jQuery (a javascript library) means that's achieved with very little code, but it's code affecting the content live. Various things have "hidden" added or removed, and the text of the button is completely rewritten. This is so routine it's not even noteworthy.

    More fun, another button will do similar things (different subsets of things moving and different information and buttons changing and some other values affected) and also update information to a database and so on.

    All of this could be done in other ways - the "old" way would be to pass the information to the page in a form submission and reload the page, writing it afresh. This is arguably more secure because there's no mixing of data and code going on but users tend not to like it, especially if you've got a big page with lots of parts - imagine something like the BBC News page or similar. You only want to update parts of that if you have the choice.

    77:

    Not quite. It's possible to write an interpreter that never needs to write to instruction memory (or executable pages on a von Neumann machine), but they're slower than ones that can generate machine code.

    A bigger issue is that if you can't write to instruction memory, you can't load new programs into it. That means you need some workaround (embedded systems handle it by giving external hardware a way to write to the instruction memory); otherwise you'd have to dedicate instruction memory to every program the user might ever wish to run, whether it's in use or not, and probably reboot whenever that set changed.
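
    Here's a concrete POSIX sketch of the write-then-execute dance (x86-64 machine code assumed; error handling omitted): the page is data while we fill it, then flipped to executable before we call it. Under a strict Harvard split, or W^X enforced with no mprotect escape hatch, this is exactly what you can't do:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* x86-64 for: mov eax, 42 ; ret */
        unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memcpy(page, code, sizeof code);              /* treat it as data... */
        mprotect(page, 4096, PROT_READ | PROT_EXEC);  /* ...then as code */

        int (*fn)(void) = (int (*)(void))page;
        printf("%d\n", fn());                          /* prints 42 */
        munmap(page, 4096);
        return 0;
    }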

    78: 24, and also #41

    I'll cheerfully agree with anyone who wants to insult C (including C++ and C-hash; look, "#" is a hash sign, not a sharp, unless you're writing music).

    Arguing that strong typing (and strong bounds checking) are anything other than a good thing though does make me wonder a bit.

    79:

    Well, I was thinking much the same things about the 18th Amendment to the USian Constitution, and how people who do not learn from history are doomed to repeat it. So that's an Aussie and a Scot agreeing on that one!

    80:

    AIUI, the same position now actually applies in the UK! Fruitbats!

    81:

    Bullet 2 - Ever hear of these things called "Australia", "Canada" and the "North Atlantic Ocean"? ;-)

    82:

    Cf. why spysat orbits are secret -- if the guys you're snooping on know when your big bird is passing overhead, they can just roll out some tarpaulins and cover up whatever your billion dollar eye is looking for.

    Ah, I spy my favorite hobbyhorse. While it's true that it would be nice if spysat orbits could be kept secret, and it's true that the gummints that launch them declare their orbits to be SECRET, it's also true that their orbits are routinely obtained with relative ease by totally vanilla means. In earlier days, that meant stopwatches and binoculars(*), these days videocameras with GPS timestamp enhancement are becoming popular. From such observations the orbits may be, and are, determined by means going back to Gauss et seq.

    See, e.g., http://sattrackcam.blogspot.com/

    (*) Binoculars are often optional, as many of the interesting satellites are big, in LEO, and quite visible to the naked eye as they go sailing across the sky at dusk and dawn.

    83:

    Yup, but that would be a multinational effort now, wouldn't it? Besides, the French do a pretty good job of fighter development, and I don't think they're flying out of Tahiti...

    84:

    Not necessarily, at least beyond a treaty saying that "nation 1 may use the following real estate in nation 2 as a weapons test and development facility".

    85:

    Hum... are the French - or the British/English/London City State, for that matter - doing...

    " New Hypersonic Military Spy Plane " ..

    http://www.youtube.com/watch?v=Jot1pUJmcUA

    Which is fairly out in the open on u-tube, isn't it? So what do the US of Americans have that isn't out there in the open, that can't be afforded by the English/London City State? And what might the latest level of Technology be in the weapons system carried by that Hypersonic Plane?

    Beyond that, what might the next level of US of American True Believers BELIEVE that the Forces of Satan HAVE? Beyond even that implied by the UK's Tony Blair, who apparently still maintains that... Of Course Saddam HAD them - WE just didn't look hard enough. So before our Armies of Righteousness Swept forth through Iran to FREE the Holy lands and bring about the second coming of Himself! And so forth... look here...

    "Russia's Fantastic Secret Nikola Tesla Super Scalar Weapons... "

    http://www.tldm.org/news8/sovietelectromagneticattacksonunitedstates.htm

    And bear in mind that the Political Movers who approve the Investment in the US of As Military Stuff do harbour a tiny minority of Religious Folk who are madder than an entire sack full of rabid ferrets.

    86:

    It's an extreme leap to say the War on Drugs would obviously fail because Prohibition did. The scale and scope of alcohol prohibition was completely different. You had to deal with things like other countries' flagrant refusal to provide any kind of assistance against smugglers, the inability to police international waters (look up Rum Row), and a police force that didn't start enforcing the law until they realized that enforcing it could get them bribes. Non-alcohol drug prohibition was never as deeply unpopular, domestically or internationally, as alcohol prohibition was.

    The international and domestic cooperation that seemed to be missing during Prohibition was readily available for the war on drugs - and then the war on drugs turned out to be a dismal failure anyway.

    87:

    Ah, HA! " I don't think they're flying out of Tahiti..." indeed!

    POOR FOOL!! That is what THEY intend that you should think!

    Consider SPECTRE, and its ruthless leader Ernst Stavro Blofeld.

    http://www.youtube.com/watch?v=pIj0Qq0c14o

    88:

    Dam H @ 10:

    I agree that the current US reliance on contractors in security and intelligence is a big problem, for that and other reasons. However, it's not a factor in Snowden's case, according to Snowden himself. He says he first became disillusioned with the American intelligence system while he was working for the CIA as an IT tech in Switzerland. When he later applied for the contractor job with the NSA, it was with the express intent of gathering documents to leak.

    89:

    For your non-techie/geeky readers ... please explain how all of this maps to or connects with 'cloud computing' ...

    For example ... I was of the impression that cloud computing basically meant that my every keystroke is already being automatically recorded in my provider's cloud. Also, because of this, it means that all of my security is constantly being monitored and updated immediately by my provider without my having to do anything. (And potentially screwing things up even more: never underestimate the destructive power of an uninformed user. That's why I use/pay real techies.)

    If so, doesn't this greatly reduce the total number of places/computers that the NSA (or other polities' equivalents) needs to surveil? If this is the case, that there's a small group of actual providers, then shouldn't they be able to create some sort of super-secret society/priesthood of computer engineers to fix this?

    90:

    No.

    Usually "cloud computing" simply means keeping storage in a network-reachable environment; usually this means that your local copy is a cache, updated when it changes.

    (There is also the situation where machines [usually virtual] are run on a third-party system, or set of systems, and that is also called "cloud computing," with more accuracy in the name. In this case, you usually connect to it over an encrypted connection -- ssh, vnc, or rdp.)

    When you see "cloud" in pretty much any context, you should instead take it to mean "outsourced storage to Amazon." (Okay, not always Amazon, but surprisingly often.)

    91:

    Usually I read "cloud" as "take my data and rent it back to me".

    92:

    If you read the article, the "economic espionage" the CIA director "admits to" might not be what you would expect.

    I won't cut-and-paste the text, just to avoid messing with copyright. But the key paragraph is in the middle, where he says that examples of where they would spy on foreign companies are (1) monitoring dual-use technology (ie, stuff that can be used for weapons) and (2) watching for violations of economic sanctions.

    He says that they don't spy on foreign companies to benefit US companies, and that even the author of the ECHELON report didn't claim they did.

    He also implies that the EU countries engage in a lot more economic espionage than the US does, which is actually pretty plausible (for a variety of reasons that are surprisingly boring).

    Naturally, you may or may not trust Woolsey (the former CIA director in question) but it's worth looking at what he said exactly.

    93:

    Um.

    It doesn't really require a conspiracy of racial oppression to explain why responsible governments would want to keep drugs like cocaine, methamphetamines, heroin, LSD, opium out of the hands of recreational users.

    Consider the Opium Wars (from the point of view of the Chinese rulers of the time), just to introduce a historical example less loaded with the modern set of trigger topics. (Not that it doesn't have some exciting trigger topics of its own.)

    Addictiveness makes it seriously exploitive to sell these things as commercial products, and it's a lot easier to overdose or otherwise mess yourself up with, say, heroin than it is with tobacco. Ask an ER doc about "drug seeking behavior". And the modern high-intensity forms (crack cocaine, crystal meth, etc) get pretty scary.

    Yeah, if we were starting from scratch, alcohol and tobacco probably wouldn't be legal either - they do stupendous amounts of damage, albeit mostly to people who've been told the risks. But alcohol has been insinuating itself into social norms for thousands of years, and tobacco hurts you very slowly and subtly compared with, say, crack. People are just kinda used to them, and since "politics is the art of the possible", they get a pass.

    Marijuana is the odd man out - might not even be as dangerous as tobacco, although it's more psychoactive/hallucinogenic. There's a case for legalization just to take it out as a cash supply for organized crime, although medical marijuana has probably already delivered a pretty severe hit to that market I would guess.

    Oh, and on the sentencing discrepancies, I haven't scrutinized the numbers, but there are a lot of confounding factors - like selling being penalized more severely than buying/using, multiple offenses, and connections with violence or organized crime or gangs - which would create similar effects due to demographic differences even in the absence of any racial intent... not to say that there's zero racial intent out there, but sometimes oppressing the underclass just ain't the point.

    Wait, we were talking about Snowden and the NSA, I have a whole different rant about that ...

    94:

    Sean writes, replying to SFreader: Usually "cloud computing" simply means keeping storage in a network-reachable environment; usually this means that your local copy is a cache, updated when it changes.

    (There is also the situation where machines [usually virtual] are run on a third-party system, or set of systems, and that is also called "cloud computing," with more accuracy in the name. In this case, you usually connect to it over an encrypted connection -- ssh, vnc, or rdp.)

    When you see "cloud" in pretty much any context, you should instead take it to mean "outsourced storage to Amazon." (Okay, not always Amazon, but surprisingly often.)

    As with most bleeding-edge technologies, the term has been overloaded by Marketing Folk.

    As someone who comes to it from the building-running-large-enterprise-IT-environments point of view, Sean's first instance is "cloud storage", where disk drives off in someone else's datacenter, with someone else's management and interface, store your data or a copy of your data (but don't own it). A lot of people use these for remote, off-site backups, including individuals, small companies, and a few medium and big ones. A lot also use it for primary storage rather than owning a datacenter full of hard drives, with their cooling and power and management headaches.

    The second description, for servers running at Amazon, Rackspace, or somewhere else with OpenStack, etc., is what we in enterprise systems most properly term "cloud computing" these days. There are physical servers at Amazon's datacenters (and Rackspace's, and... several places). Upon those, virtual servers are run, under something like VMware or Xen or KVM. Those virtual servers are then rented out to companies, along with the management interface, storage, network bandwidth, and so on and so forth.

    Again, you don't need to worry about datacenter space, power, cooling, etc.

    Usually the cloud (as a whole) is done better and more redundantly than the typical small IT operation. But at one client, as Hurricane Sandy was starting to wash ashore, we found out that its website's operational datacenter, its source code repositories' datacenter, the datacenter for its DNS service, and the datacenter for its content caching site were all in the Delaware-Virginia estimated landfall track. THAT was an exciting weekend... Fortunately (for us) it went north, but copying everything out to California in a hurry was exciting enough...

    95:

    It doesn't really require a conspiracy of racial oppression to explain why responsible governments would want to keep drugs like cocaine, methamphetamines, heroin, LSD, opium out of the hands of recreational users.

    We already have effective processes and systems in place for achieving the objective of "keeping dangerous substances away from casual users" - you go and have the medical experts look you over and based on that you get a prescription. This system works quite well.

    In the case of stopping drugs, the initiative is clearly driven by religious nuttiness: there is NO evaluation of risks versus potential usefulness; all drugs will send the user to the Flames of Hell, or lure them onto a drug that will.

    Potential medicines cannot even be tested, just because "they might be fun to someone, somewhere". Take the psychedelics, for example: there are indications that some (like psilocybin) are highly effective in curing depression. Yet these are all "Class A" drugs, and research is banned, even though these drugs are neither very toxic nor addictive.

    I will bet that depression kills more people (and causes more murders) than "magick mushrooms" / LSD / 2C-B etc. ever have, even if we include the Vikings. In a modern society, a cure should always override religious feelings.

    The same with cannabis: it works against cancer pain. If someone has cancer and it goes to the bones, it really hurts a lot, the pain even defeating morphine. WHY is it critically important to society that this person, who will probably die soon anyway, does not "take drugs"?!

    Religious Nuttiness!

    This officially sanctioned attitude is that any substance that has recreational use is 100% Evil, because of that property alone. The thinking is Black & White - Never Compromise, Never Give In, Never Forgive Transgressions and Always use totally disproportionate punishments, just like the True Lord of The Old Testament (which is not that soft-on-sinners slack-wristed Jesus character, who forgives things and stuff).

    That is religious nuttiness lived out.

    96:

    Classic Nazi fascism and variants are well described in "The Anatomy of Fascism" by Paxton. It was not optimal even in its native environment because, as an authoritarian state, the Party wished to control everything, and gave power to party members who had no particular knowledge of the organizations they were controlling. OTOH it was very good at suppressing dissent, and generally appearing more competent than it was. Rapidly changing environments - e.g. technology - favour organisations that can select for expertise, not loyalty, and that can harness dissenting voices to spot both errors and opportunities quickly. Liberal democracy may be reliant on such periods of change and the resulting opportunities to learn. Even within the UK, organisational cultures reflect the demands on them - OGH has noticed that the Laundry research areas have a different organisational culture from the Artist's Rifles. The Chinese are an interesting example precisely because they are sufficiently sophisticated to think about such things and e.g. give some partial tolerance to biddable religions.

    97:

    Er, so your argument is that Prohibition failed because other nations refused to enforce USian law? Oh yes, and because underpaying civil servants makes them relatively easily corruptible.

    98:

    Well, I can't actually argue with this beyond the points about prescriptions and "cannabis working as pain relief where all else has failed", but only because you know more than I do about the subject.

    99:

    It isn't just the industrial-scale corporate corruption (making even Washington look small) that bothers me about the EU - also its thorough screwing of the little man & the consumer, incidentally. It's their attitude to law & justice, like the European Arrest Warrant & their apparent campaign against Common Law. The EU desperately needs reform, but I don't think it's going to get it. Vote UKIP to give 'em a scare & then switch back to a mainstream party? Errr ....

    100:

    I don't think that Europe has any particular patches of empty desert in which to experiment...

    There's the Błędów Desert in Poland. According to https://en.wikipedia.org/wiki/B%C5%82%C4%99d%C3%B3w_Desert, "During the Second World War, the area was used by the German Afrika Korps for training and testing equipment before deployment in Africa".

    101:

    Thinking about it, I've done some occasional hobby work on Furphy, a functional language that uses the Forth two-stack virtual machine. Both Furphy and Forth do incremental compilation, and if optimised they would generate new machine code as they went. Even if you had fully interpreted versions, Forth ordinarily allows you to define new machine code snippets in its programming environment, and stopping that could cripple some things.

    102:

    Addictive drugs that sedate, kill pain or mildly intoxicate aren't a threat to consumerist society. Psychedelics are a different story, and are universally banned -- probably because, as Terence McKenna said: “Part of what psychedelics do is they decondition you from cultural values. This is what makes it such a political hot potato. Since all culture is a kind of con game, the most dangerous candy you can hand out is one which causes people to start questioning the rules of the game.”

    103:

    I agree with you about the EU's attitude to "British Common Law" (I think their views tend more towards the "Code Napoleon") but not about UKIP. Leaving aside any arguments about whether or not they're actually a crypto-Fascist party, saying "I dislike Nigel Farage" is not a racist statement coming from an Englishman, and the fact of my being Scottish and him being English does not make it any more so.

    104:

    The Chinese are an interesting example precisely because they are sufficiently sophisticated to think about such things and e.g. give some partial tolerance to biddable religions.

    They also have some serious cultural memory about such things switching and becoming serious threats, even if they weren't cover for subversion to begin with, e.g. the Taiping Rebellion and the Red and Yellow Turbans (one of those helped the Ming Dynasty get started, I forget which).

    105:

    My vague knowledge of Chinese history suggests that the Yellow Turbans were one of the factors in the downfall of the Han, and one of the (at least 2) Red Turban rebellions did help the Ming dynasty get started.

    I don't see any relationship to drugs in either of those, but possibly the second RTR in the 1800s did.

    106:

    I think Alex (the cartoon) has some of the best comments on Nigel Farrago of rubbish here http://www.alexcartoon.com/

    107:

    For your non-techie/geeky readers ... please explain how all of this maps to or connects with 'cloud computing' ... For example ... I was of the impression that cloud computing basically meant that my every keystroke is already being automatically recorded in my provider's cloud.

    While that is in principle possible, AFAIK it's only done when you're typing into the Google search box or editing Google Docs (well, others do it as well). But usually data is only sent when you fill out web forms or press buttons.

    Also, because of this, it means that all of my security is constantly being monitored and updated immediately by my provider without my having to do anything.

    No. Your provider is responsible for security on the server side and for the web app they provide. Security of your client device is provided by its operating system and whoever administers it (for Android devices that's always you, since manufacturers don't provide regular updates). If you don't have transport encryption, you also have to trust your internet provider not to make the data stream available to 3rd parties (and with current information that would be highly misplaced trust).

    (And potentially screwing things up even more: never underestimate the destructive power of an uninformed user. That's why I use/pay real techies.)

    Don't underestimate the ability of techies to screw up things. Even a simple act of carelessness can open huge security holes as with the recent Heartbleed bug.

    If so, doesn't this greatly reduce the total number of places/computers that the NSA (or other polities' equivalents) needs to surveil?

    Yes - which just makes the NSA's work easier.

    If this is the case, that there's a small group of actual providers, then shouldn't they be able to create some sort of super-secret society/priesthood of computer engineers to fix this?

    Big cloud providers in the US are legally forced to provide data to NSA. So you would have to convince users to avoid Apple, Microsoft, Google, Facebook and Amazon - good luck with that.

    There are already projects like Tor, Diaspora, Avatar and many others which try to provide more privacy, but they don't gain much traction.

    108:

    Thanks for that; I've been missing Alex since he defected from the Indy to the Torygraph.

    109:

    That is, I hope nobody writes deliberately self-modifying code anywhere. In the days of the 8-bit machines 35 years ago, yes, it was sometimes necessary, but not today. (Please correct me if I'm wrong and it has some real world use!)

    There are still problem domains out there where CPU cycles and some aspect or other of memory capacity are binding constraints. Embedded real-time systems have always been notorious for the hardware being spec'ed first to meet a budget constraint, with the software folks then expected to get the job done within those limits. Self-modifying, or at least "generate then execute" code; interpreters for small domain-specific languages; corner-cutting on bounds checks and a zillion other bad practices; all of that is in regular use.

    Hardware restrictions show up in the oddest ways, often smaller than one would think. For example, on a particular processor, the only way to guarantee a particular piece of code and data runs fast enough might be to fit it into level 1 cache -- which means 64K bytes, even today.

    110:

    Just-In-Time compilers can be interpreted as self-modifying code. That includes not only the Java VM, but emulators like QEMU (http://en.wikipedia.org/wiki/QEMU). Satellite simulation for mission rehearsal includes both the requirement to simulate the satellite CPU and support chips and a performance requirement - several-times-real-time simulation is useful if the mission timeline is inconvenient for training, rehearsal, and satellite software checkout.

    111:

    Speaking of which, why are some HLLs better than others? Why is C generally thought to compile to better code than Pascal or Algol? Is this because humans are crap at compiler writing, or what? Ah, for the days of BCPL or Coral 66... Or that newfangled Ada.

    112:

    To those of you who think that C is hideously unsafe: you're doing it wrong.

    I have one word for you: valgrind. Combine this great tool with a proper test suite and C is as safe as any other language.
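
    For anyone who hasn't used it, here's the sort of thing it catches. The following compiles cleanly and usually "works", but running it under valgrind flags the invalid write at once (a sketch; the exact report wording varies by version):

    #include <stdlib.h>

    int main(void) {
        int *a = malloc(4 * sizeof *a);
        a[4] = 99;    /* one element past the end: heap overflow */
        free(a);
        return 0;     /* no crash, no warning -- without valgrind */
    }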

    113:

    Thank you everybody who corrected me on the self-modifying code. I learned new things today. Apparently I haven't been thinking about JIT technologies enough, and really haven't seen the edges of computing.

    Developing software in Harvard architecture is quite different - you need a mechanism to write to the program memory, but can't really do it all the time. I've done some things on plain Atmel AVR and a bit more on an Arduino, and they need special arrangements to get the program running.
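
    For a flavour of those special arrangements, a hedged avr-gcc sketch: with the AVR's Harvard split, a constant parked in flash can't be read with an ordinary pointer dereference; you have to request a program-memory read explicitly.

    #include <avr/pgmspace.h>

    /* Lives in flash (program memory), not in scarce SRAM */
    static const char greeting[] PROGMEM = "hello";

    char first_char(void) {
        /* A plain greeting[0] would read from the data address space;
           pgm_read_byte() issues a load from program memory instead. */
        return (char)pgm_read_byte(&greeting[0]);
    }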

    I suspect that if we had Harvard architecture computers, much of this would be done with bytecode interpreters, in a way circumventing the code/data memory distinction.

    114:

    C is a stripped-down language, offering few of the features of other languages.

    Let's consider a small bit of code to sum up the elements of an array of integers. The functions are passed an array and a length, and return the sum.

    In C, the code is roughly:

    int sum(int a[], size_t len)
    {
        int retval = 0;

        while (len--) {
            retval += a[len];
        }
        return retval;
    }

    In a Pascal-type language, that would be something like:

    FUNCTION sum(a[len:int]:int)
    BEGIN
        retval:int;
        indx:int;

        retval := 0;
        for (indx := 0; indx < len; indx := indx + 1) BEGIN
            retval := retval + a[indx];
        END;
        return retval;
    END

    Largely similar, correct? The difference is that the C code would compile to something like:

        MOVE $0, %r1
        MOVE %sp(4), %r2
        MOVE %sp(0), %r3

    L:  CMP %r2, $0
        JZ L2               # all elements done
        DEC %r2             # len--; use the new value as the index
        MOVE %r3, %r4       # get the address of the array
        ADD %r2 * 4, %r4    # add in the index
        ADD (%r4), %r1      # add the element to the sum
        JMP L
    L2: MOVE %r1, %r0
        RET

    (Even that can be optimized a bit.) The Pascal-like code, however, would have:

        MOVE %sp(8), %r5    # Get the limit of the array
        [...]
        MOVE %r3, %r4       # get the address of the array
        ADD %r2 * 4, %r4    # add in the index
        JOV __overflow_trap # If the addition caused an overflow, abort
        CMP %r4, %r5
        JGE __bounds_trap   # If the address is beyond the array end, abort
        CMP %r4, %r3
        JLT __bounds_trap   # If we happen to be before the array, abort
        [...]

    (Again, that can be optimized.)

    So, to do roughly the same thing took several additional instructions, which took up both memory/disk space (in short supply on the machines on which C was developed) and CPU time (always in short supply on every computer ever developed). And while it's only a small amount here, that's on every single array reference.

    The result is that, with simple compilers, C is faster. In many cases, by a whole lot. (And, as a corollary, a C compiler itself is far easier to write.)

    But... we have lots more CPU time now. And code is being executed by a lot more people. Compilers have gotten a lot better, and it's quite possible to compress a lot of the checks into compile-time checks, rather than run-time checks. And even when it's not... those checks mean that it's that much harder to cause the code to behave in unexpected ways.

    Also of note: it is in fact possible to have bounds checking in C; it's been done at both the compiler and hardware level. C has other faults, however; I only alluded to two of them.
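
    As a sketch of what retrofitted checking looks like (hand-rolled "fat pointers" here, not any particular compiler's or hardware scheme), it's just the check that plain C omits, paid on every access:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int *base; size_t len; } int_slice;

    static int slice_get(int_slice s, size_t i) {
        if (i >= s.len) {                 /* the check plain C skips */
            fprintf(stderr, "bounds violation: index %zu, length %zu\n", i, s.len);
            abort();
        }
        return s.base[i];
    }

    int main(void) {
        int data[4] = { 1, 2, 3, 4 };
        int_slice s = { data, 4 };
        printf("%d\n", slice_get(s, 3));  /* fine */
        printf("%d\n", slice_get(s, 4));  /* aborts instead of reading garbage */
        return 0;
    }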

    115:

    No.

    As I said originally: with a lot of effort, you can work to make C code better. You cannot do anything about the lack of exception handling; you cannot do anything about the namespace issues. valgrind is a specific tool that is not available on every platform. Test suites cannot handle every possible case -- go look up the Halting Problem.

    There are a lot of ways to make a C program more reliable. valgrind is one such tool. Static analysis is another. Dynamic analysis is a third. Working with the OS to do memory allocations very differently than is usually done is a fourth.

    I could continue listing hundreds of tools and tricks that are done to make C as safe a programming language as Ada. That's my point -- all that time and effort is spent by the developer, when it should be spent by the compiler writer. And at that point it's not C.

    The people who are responsible for the GnuTLS bug, the Apple "goto fail" bug, and the Heartbleed bug are not stupid people. They programmed in a language that encourages such bugs, rather than in a language that prohibits or discourages them. And that is why C is a lousy language for implementing applications or most libraries.

    116:

    Just to emphasize Sean's last point: as an exercise, run valgrind and lint and every openly available code checker against the code base required to run, say, a Linux/Apache/PHP server and WordPress - down to the OS and all the support libraries. Say, CentOS 6.4.

    (Waits a few months...)

    Now, do you know anything more useful about the security of that system and code base?

    Actual results: ginormous mountain of false positives, real but harmless warnings, noise. Probably a handful of real security issues, too, but good luck finding them in the noise...

    117:

    Why is C generally thought to compile to better code than Pascal or Algol?

    I suspect that's no longer true, so long as you turn off enough of Pascal's features (eg, array range checking). That assumes, of course, that no run-time range checking is a good thing. A better question is "Why did C succeed to the extent that it did?"

    C's success was largely a matter of being the right language at the right time. Over the space of a very few years, both full and subset compilers became available that would target everything and run on almost anything: 16- and 32-bit minis, big iron, micros of all sorts of flavors, practically any OS around. Cross compilers were almost as common as native ones. In a very real sense, if you chose C as the language for your project, most of your code would run when TPTB wanted you to move it to different hardware. And there were a lot of free compilers, and semi-public servers for delivering those free compilers were starting to appear. By the early 1980s, if a hobbyist lived in a city of any size, they could download a C compiler and most of the standard library by making a local phone call and be in business.

    118:

    Define "better". Unless your value of "better" requires a small compiled object and short execution time, in which case you can still write assembler directly, I'd suggest that it's going to be looking for one or more of built-in error and/or bounds check handling, legibility (by which I mean that your replacement in years time should be able to understand your code) and upscalability (by which I mean that you should be able to take the snippet function illustrated in #114 and know that it will perform identically with an enterprise sized dataset as it does with the 20 or so values in your test data, then C is not "better" any more.

    119:

    That lack of bounds-checking and etc does have its advantages. C is set up as a tool, and it generally does exactly what the spec says it will do. And if that has implications we didn't think about, well, that's our problem to clean up.

    Which can be nice for wild-and-crazy code ideas. If you have some weird idea about how to set up arrays and indices and direct pointer access and stuff, you can just do it and see if it's really so clever. The language won't try to stop you and say "wait, that's not how we thought you would be handling your arrays".

    All of which is very unfashionable in these days of big dev teams and legacy code and open-source development. All for perfectly reasonable reasons. But there's something to that no-safety-net "here is a tool, this is exactly what it does, go make stuff with it" approach.

    120:

    C's success was largely a matter of being the right language at the right time.

    Richard Gabriel's series of essays on "Worse Is Better" does a very good job of explaining why Unix/C became so successful. The first essay is here (http://www.dreamsongs.com/RiseOfWorseIsBetter.html).

    121:

    You just have to accept that your code will have bugs. It's going to happen.

    If you write in C, you can write buggy code which will run fast and also likely will not take that much memory. If you're good at writing in C, you can do it fast without things like strong typing getting in your way.

    If you write in HLLs, you can write buggy code which will not run as fast and will tend to be bloated. It will have various features which are supposed to prevent bugs but which do not. Most of the bugs they do prevent will be found for C during unit tests, and since the unit tests must be run anyway the net result of the extra checking is to slow the process.

    However, HLLs may give you more and better pre-written modules you can use, that plug into the system better. When the particular modules you need are available and the bugs in them are subtle, you can gain a lot of time by using them. When you have to warp them to fit, you wind up with inefficient modules that are extra-buggy. When they're buggy to begin with you will lose more time figuring out that you shouldn't use them than you would have if they didn't exist.

    It's probably no longer worthwhile to become proficient in C if you aren't already. Employers tend to prefer to hire people who were trained quicker and who are cheaper, and competent C programmers give the impression of being worth more, and so are less employable. C code has the reputation of being harder to maintain. It's definitely more expensive to maintain, because it requires programmers who know C. But cheap coders don't necessarily do adequate maintenance either, even using HLLs with great open-source modules.

    If you do things for your own use, and you don't already know C, it isn't worth learning unless you really enjoy that sort of thing. The best I've seen for that is Python but I don't know everything.

    122:

    Another thing to note in favour of higher-level languages is that both "bugs per lines of code" and "lines of code per hour" tend to be fairly constant across languages. This means that if the HLL allows you to do what you want to do in fewer lines of code, other things being equal, the result will tend to have fewer bugs and take less time.

    123:

    You just have to accept that your code will have bugs. It's going to happen.

    True that. What I do not have to accept is that the result of an undiscovered bug is undefined by both the language and the programmer.

    124:

    Isn't that pretty much the definition of a bug?

    125:

    If you use that as your definition of a bug, the "undefined by the language" clause in #123 means that there are no bugs in Ada code. There is no circumstance in which a bug not handled by the programmer will not throw a language-defined exception (remember that Ada is an originally ANSI, later ISO standard), although a programmer can obfuscate those by having the code catch the pre-defined exception and use it to raise their own standard exception.

    126:

    "Isn't that pretty much the definition of a bug?"

    It's a bug if it isn't what the customer wants.

    If there's a specification which calls for the unwanted behavior, then it's a specification bug. If the customer realizes that what he thought he wanted was wrong, then it's a customer bug.

    127:

    "Another thing to note in favour of higher-level languages is that both "bugs per lines of code" and "lines of code per hour" tend to be fairly constant across languages. This means that if the HLL allows you to do what you want to do in fewer lines of code, other things being equal, the result will tend to have fewer bugs and take less time."

    When you limit the number of things you can do, you limit the number of possible mistakes. In the limiting case you get a HLL which allows you to do only one thing, and the only way you can have a bug is to call the program when you shouldn't, or not call it when you should.

    Still, when you can depend on the system to do all the low-level stuff correctly, then you can concentrate only on the big stuff. Then you only make high-level errors.

    "I like to think I make a better class of mistake these days...."

    128:

    I've never programmed in Ada, but I don't believe that – even by the above definition – "there are no bugs in Ada code". Bugs aren't all syntax errors, arrays out of bounds, divide by zero, etc. – nor are those that aren't all caused by incorrect specifications. If I inadvertently take the cosine of an angle instead of a sine, it's not going to throw an exception (anything I can validly take the sine of, I can equally validly take the cosine of), and unless I happen to hit exactly 90° it's not going to cause an arithmetic exception. It will, however, cause me to get the wrong answer, i.e. it's a bug.

    Quite a lot of my bugs are of this type. The computer has done what I told it to do. It just hasn't done what I wanted it to do! (Many errors of this type are much subtler than my simple example. Misunderstanding the conventions of the geometry description of your detector can also have extremely undesirable consequences without causing a formal error condition, for example.)

    Also, for my application (analysis of particle physics data), short execution time is a requirement (the more events per second I can process, the better – in a big way if I were analysing LHC data, which I'm not), and I certainly am not going to write complicated analysis code in assembler, thank you very much.

    We also have a big overhead in terms of purpose-built tools for doing this and that. Moving from FORTRAN to C++ was a pain, and involved a lot of reinvention of assorted wheels. Moving from C++ to whatever the next generation language turns out to be probably won't be any easier (though with a bit of luck I'll have retired by then and won't have to).

    129:

    Correct.

    Note that there are in fact programming languages in which programs can be formally proven to be correct. But they are hardly ever used, mainly because the first step is to write a specification for the program in a form that can then be used for the proof.

    130:

    Er, using the wrong trig function is probably a design error, and if not then it certainly is the sort of coding error that a bench test or a test data run should show up. If it's not showing up, then maybe your validations are inadequate.

    I'd also suggest that, particularly around the axes in a Cartesian co-ordinate system, a wrong trig function could cause you to fail a bounds check.

    131:

    There's the Błędów Desert in Poland.

    Per Wikipedia it's 32 km2 (12 sq mi) in area. Which is tiny by today's standards for training. Fort Hood in the US is about 150 times that size. And the fort doesn't include all the dry land around the area. :)

    I didn't realize till now that Europe was so "wet". Aren't there areas of Spain that would be considered desert?

    132:

    IIRC a "Semi-desert" has less than 20" = 500 mm of rain p.a. And a Desert has less than 10" = 250 mm.

    Curious fact: there's a tiny strip along the Essex/Suffolk coast that is, technically, semi-desert. Though not in the past 6 months!

    133:

    Aren't there areas of Spain that would be considered desert?

    Yes, at least if we're including the Canary Islands. http://en.wikipedia.org/wiki/Canary_Islands

    134:

    I don't think that can be a sufficient rather than merely a necessary condition for an area to be a desert or semi-desert, as it doesn't allow for the effects of "exotic rivers", i.e. rivers like the Nile, Euphrates and Tigris that bring in water that fell as rain or snow somewhere else. That's probably relevant for your Essex/Suffolk coast case, too.

    135:

    "We already have effective processes and systems in place for achieving the objective of "keeping dangerous substances away from casual users" - you go and have the medical experts look you over and based on that you get a prescription. This system works quite well."

    Sorry, I think I'm missing something here - are you arguing that people would stop using crack, heroin etc illegally, if we just didn't enforce the laws against them? Or that if we allowed doctors to prescribe them, no one would try to use them without a script?

    I don't think that passes the plausibility test - even if we set aside stuff like crystal meth, oxycodone is one example of a prescription drug that's had some raging abuse problems, which have generated a lot of work on alternate forms of oxy that are harder to abuse.

    "In the case of stopping drugs, the initiative is clearly driven by religious nuttyness: There is NO evaluation of risks versus potential usefulness ... Potential medicine cannot be even tested just because "they might be fun to someone, somewhere" "

    I fear you have been misinformed.

    Research into controlled substances goes on all the time. It can be a big regulatory headache, but heck, if you're doing medical research involving effects on human beings, regulatory headaches are probably something you're already used to.

    I'm not much familiar with regulatory schemes outside the US, but the US ratings (Schedule I, II, ... V) are based exactly on risks vs medical use. You might not agree with the conclusions they come to (with marijuana as the standard case there ...) but I think you're a bit off track if you don't think there's any evaluation.

    "The same with Cannabis; It works against cancer pains. If someone has cancer, it goes to the bones, this really hurts a lot, the pain even defeating morphine - WHY is it critically important for society that this person, who will probably die soon anyway, do not "take drugs"?!"

    Well, in the case of cannabis one practical objection is that it's available in pill form, by prescription, and has been for nearly 30 years. Used routinely for this purpose.

    http://www.marinol.com/ http://www.fda.gov/ohrms/dockets/dockets/05n0479/05N-0479-emc0004-04.pdf

    From a strictly medical perspective, a pill is much more desirable than a regular marijuana cigarette - consistent purity, known dosage, tested stability and storage conditions, easier to control, pharmaceutical-grade quality control, patient doesn't have to inhale a bunch of smoke, etc.

    Works quite well in most cases, although in practice it's generally most useful as an anti-nausea drug (regardless of whether used as a pill or smoked). (Don't discount anti-nausea, it can be really critical for cancer patients.)

    Not to get too dark, but if pain is severe enough to overwhelm morphine, I suspect marijuana will be of pretty limited benefit; it's not mainly a painkiller. But only speculating.

    Sure, there will always be patients who maintain that they feel better with one form over another - it can be very subjective. Some work was being done on comparing the inhaled vs ingested forms years ago, as I recall, not sure how that came out (never followed it closely).

    All of that might raise the question of why there has been such a passionate "medical marijuana" movement. To which I will suggest that while it is almost always a good idea to keep a close eye on one's government, it is an even better idea not to place too much trust in political special interest groups. Such as the "legalize medical marijuana" folks.

    "This official sanctioned attitude is that any substance that have recreational use is 100% Evil, because of that property only."

    Again, I fear that you've been misinformed - see the cannabis pills above, or morphine is another good example. Or tranquilizers. Or stimulants. Figuring out how to get the benefits, while controlling the risks and harms, is an ongoing effort by lots of people. Imperfect people, who are not saints (heck, some of them are politicians) and who can and do make mistakes. But it really is a hard problem, with lots at stake.

    136:

    Just to stir up some trouble, in a way that is more relevant to our most accommodating host's original post ...

    One of the obvious scenarios for Edward Snowden was always that he was being manipulated (or employed, or duped, or just helped out) by the KGB or some other hostile nation-level player. He did flee to Russia with a hard drive full of NSA secrets, he did release craptons of methods & capabilities information (which as OGH pointed out earlier, is EXTREMELY important), and he did leave massive disruption of USINTEL in his wake - maybe it's great for US civil liberties, but it's an enormous victory if you're the KGB (sorry, FSB now) or the Chinese intelligence service.

    The point that I am dancing around is this: for practical network security purposes, what difference would that make ?

    In other words, if we view the whole thing as a hostile intelligence operation, that Snowden and co cleverly spun as an act of righteous civil disobedience, does that change any of the features we'd want to see in the greatly-improved network security that OGH proposed at the beginning of all this? Or is the ideal answer to what we need pretty much the same?

    137:

    He said "As for Snowden's leaks it appears to me, that too much crap is classified secret and Top secret and so anyone with those clearences have access to too much crap."

    Sir, you should probably read the security briefing materials. (You can find them on the Web if you dig a little.) The first thing they tell you is that there are two pieces: appropriate level of clearance, and need-to-know.

    One most emphatically does not get access to some particular piece of SECRET (for example) information JUST by having been granted SECRET clearance. One must also need to know that information, in order to do one's day-to-day job. (Recall that Bob did not find out about basilisk guns and the surveillance camera mod program until he actually needed to know about them.)

    As a general rule, information is classified at the level it is classified for damned good reasons. The reasons may not always be obvious. (A friend of mine once told me that he'd had "a classified number" of a certain device, produced by an adversary nation, in his lab. That he had them was not classified. How many he had was highly classified, because it would give the Bad Guys a pretty good lead on where and how he got them, and burning a source is usually a bad idea in the Great Game.)

    138:

    Sir, you missed a couple of very important systems.

    Plessey 250 - Built for a military telephone switch project, the Plessey 250 hardware design took as a fundamental design principle that EVERY array access must be subscript-checked, in hardware, without exception. The machine was a howling success.

    Burroughs B5000 - The Burroughs B5000 series was the first real descriptor machine (not quite a full capability machine, but almost). EVERY access got checked, in hardware. If your program had not been explicitly granted a descriptor for some region of memory (some data object), your program was not able to access that memory. Period. Also a howling success, and I THINK that follow-on machines are still running Burroughs Extended Algol code today.

    You also missed a very important comment, from Tony Hoare, I think, along the lines of "Running with subscript checking enabled during testing, and then turning it off for production, is like wearing a life jacket inside the harbor, and then taking it off when you venture out into open ocean." Having sailed a small boat (and a 105' schooner, once) on the Pacific Ocean, and in protected harbors adjacent, I appreciate this one.

    139:

    Water flowing past in a river does not ameliorate a desert, except through human intervention, does it? And I have quoted what I believe to be the official definitions. HERE is the wiki definition: deserts generally receive less than 250 mm (10 in) of precipitation each year. Semideserts are regions which receive between 250 and 500 mm (10 and 20 in), and when clad in grass these are known as steppes. OK?

    140:

    Both in this & your two subsequent posts, you do seem to be vigorously toeing the "official/party" line, don't you?

    In the UK, several people who grow their own cannabis for medical purposes (pain relief for chronic spinal conditions, usually - NOT cancer) have been granted personal exemptions by the local (Magistrates') courts, after local nosey-parkers or zealous police-prodnoses have made their lives even more miserable.

    Snowden only went to Russia, because everyone else was too cowardly to give him refuge, IIRC.

    141:

    Ok, Greg's already said this twice, so hopefully someone else saying it too will convince you. The definition of "desert" is "land receiving less than 250mm of liquid precipitation (so you have to melt snowfall) in a year".

    142:

    I think you have missed something here. Doctors will only prescribe $drug to people who have a medical need for it. I think what we're proposing here is more by way of a "fitness test", where anyone who applies for and can pass that test (at periodic intervals) gets a licence to visit facilities where they will be dosed with their narcotic or psychedelic of choice.

    That handles most of your objections by opening up the supply line, and having done that also removes most of the incentives to criminality by markedly reducing the street price. I'll answer your other objections about opiates and cannabinoids separately (well, maybe).

    143:

    Agreed, note the cases where Charlie describes something as being "SECRET $Codeword". This explicitly means that if you have not been "read on" to the $Codeword access list, then you are not cleared to read a document carrying $Codeword.

    This applies even if your clearance is higher level than the document, for example, even if you have an ATOMIC SECRET clearance then if you have not been read on to codeword ANGRY GERBIL you are not cleared to read a document classified "CONFIDENTIAL ANGRY GERBIL".
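    To make that rule concrete, here's a minimal Python sketch of the access check (the numeric level mapping and the codewords are invented for illustration; the point is that the codeword test is a set-membership check entirely separate from the level comparison):

        def can_read(level: int, codewords: set[str],
                     doc_level: int, doc_codewords: set[str]) -> bool:
            """Level alone is never enough: every codeword on the document
            must also be on the reader's read-on list."""
            return level >= doc_level and doc_codewords <= codewords

        # Hypothetical levels: CONFIDENTIAL=1, SECRET=2, ATOMIC SECRET=3.
        # ATOMIC SECRET clearance, but not read on to ANGRY GERBIL:
        print(can_read(3, {"ATOMIC"}, 1, {"ANGRY GERBIL"}))   # False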

    144:

    You seem to have some knowledge of pharmacy, pharmaceuticals and related fields. Such being the case, why are you apparently unwilling to admit that people can, and do, build up tolerances to opiates, so that higher doses are required to achieve a given effect, even if that effect is purely analgesic?

    145:

    Our system for keeping government secrets is fundamentally bad for us, though it may be in some way necessary.

    For example, we have a whole lot of research on hydrogen fusion which may provide important information toward fusion reactors. Most of it is secret because it might also help foreign nations develop hydrogen bombs. When balancing the two possibilities, which do our censors overemphasize? The latter, by far.

    It's hard to measure how much our secrecy hurts us versus our enemies. But it's definitely a negative-sum game.

    146:
    Sorry, I think I'm missing something here - are you arguing that people would stop using crack, heroin etc illegally, if we just didn't enforce the laws against them?

    No, the point was that they could have been put under prescription instead of creating the laws against them. As you point out, oxycodone abuse hasn't broken the prescription system.

    147:

    I know perfectly well that that is what the official definition says. What I'm saying is that, to the extent that it stops there, it is thoroughly defective. It's a "if the law says that, sir, then the law is a ass, sir, a ass" sort of thing. Because areas that qualify as deserts under that definition but that have exotic water resources are and never have been, well, deserted, i.e. abandoned by people.

    148:

    Water flowing past in a river does not ameliorate a desert, except through human intervention, does it?

    Actually, no, any more than the wildlife that flourished in the Nile, Tigris and Euphrates needed human intervention. But, people being people, when they found those resources they applied human intervention to get even more out of them.

    But the point I was bringing out wasn't that that couldn't be the official definition of a desert but rather that the official definition fell short of being fully informative by not allowing for those (natural) things that made a difference in certain areas.

    149:

    Oops - "aren't", not "are".

    150:

    Sure, tolerance can be a problem, with opioids and various others, but I'm afraid I can't find where I "refused to admit" that? I don't think it's come up before in this thread, not sure how it ties in.

    Unless you're assuming that the reason for using marijuana rather than opioids, is tolerance to the opioids? I took the original phrasing ("it goes to the bones, this really hurts a lot, the pain even defeating morphine") to mean that opioids just weren't effective, rather than anything as specific as tolerance.

    151:

    Ok then, write us what you think is a better definition, and see if we can kick holes in it.

    152:

    There are cryptosystems that are secure against fast factorizing: https://en.wikipedia.org/wiki/McEliece_cryptosystem

    153:

    Maybe I've misread you, but I thought that you were claiming that cannabinoids could not possibly be more effective than opiates in individual cases. I was citing one of the more common (and at least unofficially medically admitted) reasons why this is not so.

    154:

    @3lucid: McEliece is mentioned as secure against quantum computing, rather than against fast factorisation. If integer factorisation is your concern, you can use elliptic curves; they're also pretty handy for making message sizes smaller. The problem is that there are a number of Certicom patents on them which don't expire until 2017.

    Much of what Snowden leaked appears to have been the NSA internal Wikipedia for analysts: training material indicating what sort of things can be done, without much detail on how they're done, and on what kind of people can be targeted, without much detail on how they're targeted. This is clearly a really, really useful resource to have in place to keep your analysts up to speed and to help with the problem that your institutional memory walks out of the door every night and might sometimes be hit by a bus.

    155:

    I think you have missed something here. Doctors will only prescribe $drug to people who have a medical need for it.

    That's how the NHS works. Officially and, usually, unofficially. But it's not how private medicine (customer paying variety) works. Or even how your regular GP works here in NHS-land (Scotland, that is) if you ask for a private prescription. At that point they'll write you one for anything you have a medical need for, as long as it's not obviously going to damage you or get them struck off. Alas, most of the fun stuff is damaging or illegal, so this won't get you very far over here -- but for what it's like elsewhere, look no further than the sorry history of Michael Jackson.

    156:

    Note that (groan) the Tories are totally re-jigging the way the civil service handles classification levels this decade. In such a totally brain-dead manner that they had to go back to square one, because they'd taken no account of the difference between the probability of unauthorized disclosure being damaging and the severity of damage resulting in event of unauthorized disclosure.

    (Random not-terribly-good illustrative example: Disclosing back-handers paid to the Minister's dog-walker is undoubtedly damaging, and said brown envelopes stuffed with banknotes can probably be kept secret easily, but the severity of the resulting damage is trivial. Whereas knowledge of the precise route the dog-walker walks each day, so that a sufficiently motivated Trrrrst might intercept the ministerial pooch and strap a bomb to it, thereby assassinating the minister, is hard to keep secret -- J. Random Trrrrst could in principle deduce it with the aid of a watch and a notepad -- but the severity of the resulting damage would be extreme.)

    Revised guidelines here (effective as of about five weeks ago and counting).

    157:

    J Thomas wrote: For example, we have a whole lot of research on hydrogen fusion which may provide important information toward fusion reactors. Most of it is secret because it might also help foreign nations develop hydrogen bombs. When balancing the two possibilities, which do our censors overemphasize? The latter, by far.

    This is a decidedly optimistic view of where that line is drawn. Enough about ICF and fusion weapons design is declassified that it would take an idiot of a WMD program to be unable to build one, given sufficient time and budget.

    158:

    For all the data the NSA has collected on us, it hasn’t protected us against cyber hacks from Chinese and Russian Internet pirates. More often than not the last residual trails seem to lead to IP numbers in these countries. These pirates have hacked our e-mail accounts, social security numbers, and credit card numbers … our virtual identities. All the NSA has done is given away more data through its own bureaucratic incompetence. Some call Snowden a hero, others call him a traitor; he was just an accident waiting to happen.

    The irony is that Snowden has taken refuge in one of the countries that seems to be a harbor for Internet pirates.

    159:

    You might think that; I could not possibly comment.

    In any event, I was explaining how the system extant in "Tales from the Laundry Files" which occurred prior to the current calendar year worked.

    160:

    Since you cite the Michael Jackson case, his personal physician was charged with involuntary manslaughter, despite only having issued scripts for prescription medications. Wikipedia doesn't report the outcome of the case, I can't remember having heard it, and that being the Yousay, it could still be ongoing.

    161:

    I... really do not understand why anyone at all uses any encryption other than one time pads. Making very large ones with current tech is very cheap, and while ensuring secure manufacture and distribution takes some effort, it isn't that obnoxious. If you have money riding on your information security...

    162:

    Try encrypting the contents of this blog entry (OP and the present 162 comments) manually. Now do you see why one time pads are only actually of any use for short (say SMS length) messages?

    163:

    I... really do not understand why anyone at all uses any encryption other than one time pads.

    The problem is key distribution. You have to physically transfer the key (offline -- online transfer won't do) and only ever use it once. Use the encryption key more than once and the OTP becomes trivially breakable. So any/all suspected OTP messages will be retained in perpetuity by any intel agency that sees one, against the day. (Old KGB OTP messages were being cracked thirty years later simply by throwing raw computing power at corpuses of dusty transcripts of numbers stations. Because someone who didn't understand crypto skimped on the keys and reused one in the name of convenience.)

    You can't distribute the key via internet (how do you know the routers aren't compromised? "Use ssl" -- yes, but if a heartbleed-like bug shows up years later, all your OTP keys then belong to whoever recorded the raw SSL session). So it's back to doing it all by hand.

    Upshot: OTP is fine for a low-bandwidth or high latency network carrying critical data -- e.g. spies sending photos of secret gummint missile technology back to SPECTRE HQ -- but not for the internet.
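    To make the reuse failure concrete, here's a minimal Python sketch (toy messages, with os.urandom standing in for a properly generated pad). XOR two ciphertexts made with the same pad and the pad cancels out entirely, leaving the XOR of two low-entropy plaintexts for a crib-dragging attacker:

        import os

        def otp_xor(data: bytes, pad: bytes) -> bytes:
            """One pad byte per message byte; the same op encrypts and decrypts."""
            assert len(pad) >= len(data), "pad must be at least as long as the message"
            return bytes(d ^ p for d, p in zip(data, pad))

        pad = os.urandom(64)                  # used once: information-theoretically secure
        c1 = otp_xor(b"attack at dawn", pad)
        c2 = otp_xor(b"retreat at ten", pad)  # FATAL: same pad reused

        # The eavesdropper's view: XORing the ciphertexts cancels the pad.
        leak = otp_xor(c1, c2)
        assert leak == otp_xor(b"attack at dawn", b"retreat at ten")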

    164:

    Somewhere I came across the cost estimate for supplying one OTP teleprinter link (I assume using UCO-5 or ROCKEX) and it was 5k GBP per annum, back when that would buy you a couple of decent houses.

    More recently, I read that one of the uses of the COLOSSUS (presumably post-WW2) was to analyse the raw OTP punched tape such that any slightly statistically non-random ones could be discarded before issue. (I wonder if that was the reason the public were not to be allowed "hands-on" access to COLOSSUS? Another odd restriction was that ROCKEX machines were only allowed out to the Diplomatic Wireless Museum (Gone!) at Bletchley Park under the same "no public access" rule, despite the fact they were only two tape readers and some relay XOR logic -- that may just have been paranoia, of course.)

    The checked tapes were respooled (tightly) and then printed across the side with (presumably) a serial number and suitable design so that it would be obvious if they had been tampered with in transit. (Obviously before C-T scanners existed!)

    After that you send them around the world via diplomatic bag or equivalent (and monitor for any suspicious delays in transit).

    165:

    McEliece is mentioned as secure against quantum computing, rather than against fast factorisation.

    There's a known algorithm for fast factorization with a quantum computer, called Shor's algorithm. The desire to run this algorithm for crypto-breaking purposes is probably the main reason for interest in quantum computing.
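    For the curious: the quantum circuit only does the period finding; the rest of Shor's algorithm is classical number theory. A toy Python sketch, with the period found by brute force -- which is exactly the exponential-time step a quantum computer would replace:

        from math import gcd
        from random import randrange

        def find_period(a: int, n: int) -> int:
            """Brute-force the order r of a mod n -- the only step Shor's
            quantum circuit actually speeds up."""
            r, x = 1, a % n
            while x != 1:
                x = (x * a) % n
                r += 1
            return r

        def shor_classical_part(n: int) -> int:
            """Classical reduction: a nontrivial factor of n from the period of a random base."""
            while True:
                a = randrange(2, n)
                g = gcd(a, n)
                if g > 1:
                    return g              # lucky hit: a shares a factor with n
                r = find_period(a, n)
                if r % 2 == 0:
                    y = pow(a, r // 2, n)
                    if y != n - 1:        # then gcd(y - 1, n) is a nontrivial factor
                        return gcd(y - 1, n)

        print(shor_classical_part(3233))  # 3233 = 53 * 61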

    166: 157

    "Enough about ICF and fusion weapons design is declassified that it would take an WMD program idiot to be unable to build one, given sufficient time and bugdet."

    Yes, but lots more is secret. I think. I'm partly going by a discussion more than 10 years ago from someone at the University of Rochester Laser Lab. They got defense industry funding for their laser fusion experiments, and then they tried to publish, and they kept trying to reveal results that duplicated classified results. The quote went "This lab is like a pimple on the ass of the military nuclear program, and we generate more alert notifications than everything else put together."

    I have the strong sense that however much information has already been revealed about fusion bombs, there's a lot that has not been revealed that could be useful for fusion reactors. But I'm no expert, and I don't really know what the secrets are or how useful they would be.

    167:

    "There's a known algorithm for fast factorization with a quantum computer, called Shor's algorithm"

    Is it as good as advertised? I found this recently (4 years old, but by someone who knows about this kind of stuff); he calls this type of algorithm "Galactic", i.e. not useful for practical purposes: the runtime may be logarithmic, but that is of little use if the size of the program or the computer is exponential. It's not uncommon for scientists to gloss over this kind of detail when trying to get £££ for fundamental research from political, military or quasi-military brass. Examples include nuclear fusion, CERN, etc.

    http://rjlipton.wordpress.com/2010/10/23/galactic-algorithms/

    "The thought behind it is simple: some algorithms are wonderful, so wonderful that their discovery is hailed as a major achievement. Yet these algorithms are never used and many never will be used—at least not on terrestrial data sets."

    and Wikipedia : ( http://en.wikipedia.org/wiki/Shor%27s_algorithm )

    The bottleneck

    ...

    "Reversible circuits typically use on the order of n^3 gates for n qubits. Alternative techniques asymptotically improve gate counts by using quantum Fourier transforms, but are not competitive with less than 600 qubits due to high constants."

    ...

    168:
    Much of what Snowden leaked appears to have been the NSA internal Wikipedia for analysts: training material indicating what sort of things can be done, without much detail on how they're done, and on what kind of people can be targeted, without much detail on how they're targeted.

    I will point out Snowden probably has more detail than we've seen; operational details are being redacted by those publishing the stories, as they're more interested in the public debate. This reliably sends up howls from cypherpunks as each story is released without technical facts crucial to plug the hole.

    169:

    J Thomas wrote: Yes, but lots more is secret. I think. [....]. But I'm no expert, and I don't really know what the secrets are or how useful they would be.

    I should probably interject here; you want to google my full name (George William Herbert), and I am an open-source nuke technology expert.

    I have a biased opinion, but reasonably educated. My read on the best ICF data is that it and a few other classified corners of things not directly involving pulsed prompt fast neutron supercritical assemblies is roughly this...

    Based on now well-known physical principles, building a mere thermonuclear secondary is roughly a physics Master's degree level difficulty program with open source information.

    Making very efficient thermonuclear bombs (2-3 KT/kg) is a PhD level difficulty program with open source information.

    Making the most supremely efficient thermonuclear bombs (5-6 KT/kg) requires more than that, including the functional ability to really grok how interstages work and for example what Fogbank really does for "compact" weapons. Very very few people outside bomb programs can put that together. I will admit that my internalization of the plasma and photon gas equations is not at this level; there's a difference between "That's a Marshak Wave" and "Dude, isn't that beautiful coupling between the...".

    There's nothing in the concepts or math that eludes me, but it's hard enough that it doesn't pop out as easy.

    A lot of people think that all bomb design is in the latter difficulty category. The design of the first bombs can be recreated with a pocket calculator and spreadsheet or enough pads of engineering paper, now that we know they're possible...

    170:

    I... really do not understand why anyone at all uses any encryption other than one time pads.

    The problem is key distribution. You have to physically transfer the key (offline -- online transfer won't do) and only ever use it once.

    Yesbut, that's a matter of resources (read money). If you're a big, rich corporate entity (USG, Toyota) where communications need to be carried out between a smallish number (dozens, low hundreds) of fixed sites that have good physical security, then the key distribution problem largely reduces to buying airplane tickets for pairs (two-person rule) of couriers(*). Or doing it by bizjet, again with teams rather than single couriers. Depart Crypto Central in a guarded SUV with some 32 GB memory cards in a locked attache case, handcuffed to one of the couriers if desired, go to the airport, fly to destination, meet another SUV, go to the secured communications facility at the destination.

    (*) A former spook told an amusing story about having a meal with a crypto courier whose government was too cheap to pay for two tickets. An understanding was reached and the courier checked his briefcase while they enjoyed a leisurely dinner. Both parties went away well satisfied with the evening's outcome.

    171:

    Eeck, that classification system is 'orrible.

    No need to mark at the 'official' level - basically means everything gets treated at that level (even the lunch menu). Then the next step is secret - which is generally a major hassle that nobody wants to have to bother with.

    The UK having restricted and confidential always had major plus points over the US system in that there was more gradation of seriousness possible, whereas the yanks had a huge jump to the hassle factor associated with their first rung. This is even worse than the US system.

    Let me guess, this was so that everything, by default, has to be treated as a secret? It's certainly not been devised with any kind of eye towards practicality.

    172:

    Wikipedia does have an article on the trial of Conrad Murray, Michael Jackson's personal physician. The jury convicted and the judge sentenced Dr. Murray to four years, but he was released early last October due to California's prison overcrowding problem. I'd note that Dr. Murray was at the scene when Jackson died, so "only having issued scripts for prescription medications" is not an accurate description of the doctor's actions.

    173:

    George Herbert, I'm happy to hear from a real expert.

    Making the most supremely efficient thermonuclear bombs (5-6 KT/kg) requires more than that, including the functional ability to really grok how interstages work and for example what Fogbank really does for "compact" weapons.

    It also requires testing to find out how right you are. Without testing against reality, you can grok all sorts of things and believe you see beautiful couplings, and you might be wrong.

    Very very few people outside bomb programs can put that together.

    Definitely.

    It sounds like your point is that it's possible to make a thermonuclear bomb that works, with rather little knowledge. And it takes a whole lot of knowledge and skill to make truly efficient bombs.

    The conclusions I'd draw from this are that first, enough information is already out for anybody who has sufficient resources to build thermonuclear weapons that will probably work. Second, there is a lot of detailed information that would be useful to people who want to build weapons, which is still secret. It would affect not the fact of their nuclear weapons but the yield.

    My claim, which could possibly be wrong but I believe it, is that Rochester LLE was doing experiments shining a whole lot of laser light onto pellets of frozen deuterium-etc and watching the results in terms of fusion, and they wanted to publish their results to assist in fusion reactor research, but they had problems from DoD because the same data could be used to assist fusion bomb research.

    My own opinion is that if we get practical fusion reactors it could reduce the chance of nuclear war, even though various nations would find it easier to build nuclear bombs. I think that the possibility this information would increase the likelihood of practical fusion reactors outweighs the possibility that it would increase the likelihood of nuclear war.

    What do you think?

    174:

    Plus, of course, the, err... "widespread suspicion" that the Jackson family & lawyers hung Dr Murray out to dry, quite deliberately & as a matter of policy ... So that they could blame someone else. Nothing at all to do with MJ being totally loopy, of course.

    The words: "Show Trial" come to mind.

    175:

    No. It is deliberately designed to frustrate FoI requests. It's an internal Civil Service revenge move. The Tories have rolled over (à la "Yes Minister") to an administrative counter-attack. For another example of this, look at the Treasury's 40-year programme of deliberately trying (& often succeeding) to trash our railways.

    176:

    The main thing is that large amounts of storage are cheap, and any entity which needs secure communications almost certainly has in-person meetings on a regular basis anyway - the cheap way of setting up a one time pad link is to hand a terabyte drive over to whichever management/engineering/whatever team is visiting that branch next. Heck, arguably, that is more secure, because it makes it harder to spot a pad handoff. You don't have to do this very often, because a terabyte pad will support communications for a bloody long time, and it isn't like a Toyota engineer is going to object to being asked "Hand this clay tablet (old-school seal!) to Yoshi while you are there?" at short notice.

    177:

    Thanks; I'd expected that to link from the entry on MJ himself, and it didn't seem to (at least in the most reasonable place). By "only issuing scripts for prescription meds" I simply meant that he had not committed any single act that was illegal in and of itself (although the totality of his actions might constitute negligence [I'm not qualified to comment on that in detail]).

    178:

    I think you still have to use the system in a secure manner (starting with machines that are never connected to the internet -- and keeping a genuinely non-internet machine is a hassle in itself these days, trust me on this), and continuing to how you transmit messages from A to B. I'd suggest that you need the same courier-pairs system, and you've now built in anything up to 24 hours of latency between A raising a message and B starting to decrypt it.

    179:

    Second, there is a lot of detailed information that would be useful to people who want to build weapons, which is still secret. It would affect not the fact of their nuclear weapons but the yield.

    Not the yield -- the efficiency.

    There's a huge difference between being able to build a 1Mt hydrogen bomb that weighs ten tons and a 1Mt hydrogen bomb that weighs 400 kilograms. The former can only be delivered by something the size of a B-52, an R-7 (Soyuz launcher), or a shipping container. The latter is something you can stick under the wings of a second-hand F-16 (paging Pakistan Air Force, Pakistan to the white courtesy phone) or on top of a Vega-class ELV (read: an I-can't-believe-this-isn't-a-Minuteman-III manufactured in Italy -- lest we forget, that Rocket Science stuff is 1950s technology).

    180:

    For that matter, the 400kg version could be loaded into most bizjets (maybe in one piece, maybe needs assembling inside) or under-wing on a Falcon 20 if you can source Super Mystere pylons.

    At this point, if you can achieve an aerodynamically safe release, it becomes possible to perform 3 strikes, 2 by bombing and a suicide strike, with said Falcon 20...

    181:

    you've now built in anything up to 24 hours latency between A raising a message, and B starting to decrypt it.

    Relevant concepts can be found at

    https://en.wikipedia.org/wiki/Sneakernet

    and

    https://en.wikipedia.org/wiki/Air_gap_%28networking%29

    So, once the OTP terabyte drives have been delivered and are in place in the Secure Isolated Rooms (I made that up) at both ends of the transaction, the additional latency is that introduced by sneakernet transport of (say) SD cards between the Internet Rooms and the SIRs. With, of course, scrubbing of the cards, perhaps on a separate computer in the SIR, to make sure nothing evil has snuck onto them from the Net. Added latencies should be in the less than 30 minute range.

    SIR(A) - Sneakernet - IR(A) - Internet - IR(B) - Sneakernet - SIR(B)

    More levels of paranoia could be introduced -- e.g. use a noncomputer machine for OTP XORing in the SIR -- but I suspect that a never-connected (preferably nonconnectable) PC would do in most situations.
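    A sketch of the bookkeeping this implies, in Python (file names and format invented for illustration): the only state that matters inside the SIR is how much pad has been consumed, and the offset marker must only ever move forward.

        import os

        PAD_FILE = "pad.bin"        # hypothetical: the couriered terabyte pad
        OFFSET_FILE = "pad.offset"  # hypothetical: bytes of pad consumed so far

        def _offset() -> int:
            return int(open(OFFSET_FILE).read()) if os.path.exists(OFFSET_FILE) else 0

        def encrypt(message: bytes) -> tuple[int, bytes]:
            """Consume the next len(message) pad bytes, never to be reused.
            Returns (offset, ciphertext); the receiver seeks to the same offset
            in their identical copy of the pad and XORs again to decrypt."""
            start = _offset()
            with open(PAD_FILE, "rb") as f:
                f.seek(start)
                pad = f.read(len(message))
            if len(pad) < len(message):
                raise RuntimeError("pad exhausted -- time to courier a fresh drive")
            with open(OFFSET_FILE, "w") as f:  # advance the marker; never rewind it
                f.write(str(start + len(message)))
            return start, bytes(m ^ p for m, p in zip(message, pad))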

    182:

    With your scheme, you've still got the stage where you're sending obviously encrypted transmissions over the web. If you remember, that's what we're trying to avoid doing.

    183:

    Not the yield -- the efficiency.

    Yes, you're right.

    There's a huge difference between being able to build a 1Mt hydrogen bomb that weighs ten tons and a 1Mt hydrogen bomb that weighs 400 kilograms.

    If it scales down, a single 40 Kt bomb can ruin your whole day too.

    Anyway, my claim is that getting practical fusion reactors is considerably more valuable than delaying third-world high-efficiency fusion bombs.

    184:

    With your scheme, you've still got the stage where you're sending obviously encrypted transmissions over the web. If you remember, that's what we're trying to avoid doing.

    Actually, no, I don't remember that. Could you provide a pointer to the message that talked about such a concern in connection with OTP, please? In any case, I doubt that embassies would worry too much about being caught sending/receiving encrypted traffic, since they do it all the time anyway. Probably the same applies to many corporate offices.

    The point, such as it was, is that there are significant applications in which the disadvantages of OTP (key distribution, latency) aren't a show-stopper. And, if done properly, OTP replaces cryptographic concerns with physical and personnel ones, which, again, would be a concern no matter what crypto system was used.

    185:

    Losing my math mojo, what with all the weeks I've been out seeing the sights, but -- can you use an OTP to send an OTP? From what I understand of the procedure, it's just XORing the bits of the message (with the operation being its own inverse) with bits from a randomly generated string. So wouldn't it be safe to send a fresh pad via a 'used up' old one? Or are cryptographic attacks really just that effective?

    186:

    I'm not 100% certain, but I think that would run into essentially the same problem OGH pointed out in post #163.

    If one-time-pads are connected in this way, using each one to encrypt the next, then an attacker that has all the (encoded) transmissions of OTPs, and manages to decrypt (or steal) any one of the pads, would then be able to decrypt all of them.

    187:

    If your OTP lets you encode one byte of data for one byte of OTP, and you are careful not to re-use your OTP, then transmitting a new OTP by using up the old one at the same rate will not help.

    188:

    And really there is just no point. A one terabyte harddrive costs chump change. Fill it with a pad, and that should cover your com needs for fracking ever unless you insist on high-definition teleconferencing every friday.

    189:

    Yes, information is information and encrypting an OTP still uses up bits; I thought that was implied in my comment. But Charlie says that using an OTP more than once is trivially breakable on low-entropy messages. Without knowing the details, the mere fact that this is so leads me to believe that, while breaking the crypto may be computer-intensive, it certainly won't be 'hard'. OTOH, OTPs have extremely high entropies -- the highest that is feasibly possible in fact. So I'm wondering what the flaw in this reasoning is, i.e., why you can't send more usable OTPs via the original OTP.

    190:

    The entire point of going with one time pads is that it frees you from engaging in an arms race of mathematical cleverness. It's unbreakable, and the practical burden of moving pads just isn't that great. Trading in the perfection of doing this right in order to get some minor level of convenience back... No. Don't even waste time thinking about it. One time means one time.

    191:

    Anyway, my claim is that getting practical fusion reactors is considerably more valuable than delaying third-world high-efficiency fusion bombs.

    I think we can safely say that anyone who can scratch-build high-efficiency fusion bombs doesn't qualify for the "third world" epithet any more: at worst, they're developing-world (where the future is unevenly distributed -- think China or India, with space programs and military-industrial complexes and a dwindling but still-extant population of peasants).

    But: practical fusion? They don't inherently prevent someone capturing neutrons to breed 238U into 239Pu by blanketing the reactor, so they're not an intrinsic anti-proliferation tool. The reactor structure is subject to secondary activation, so they're not a magic bullet solution to the high level waste problem either. Meanwhile fission reactor melt-downs turn out to be vastly over-stated in terms of hazard (albeit somewhat understated in terms of probability).

    The real question is, can we do fusion cheap enough for it to be economically viable? And I'm not sure we can, because the existing fission industry incumbents will lobby to ensure that safety standards designed for fission reactors (to avert the perceived risk of meltdowns and proliferation) will also be applied to fusion reactors, thus raising their price into the stratosphere -- and there's no obvious reason why they shouldn't.

    192:

    The entire point of going with one time pads is that it frees you from engaging in an arms race of mathematical cleverness. It's unbreakable...

    Yesbut, you have to be really, really careful that your process for generating the OTP doesn't totally depend on mathematical cleverness (e.g., pseudorandom sequences), otherwise you're back into the same situation as before. Pseudorandom sequences do seem to underlie serious cryptosystems in use today but, still, they're generated by deterministic algorithms that expand fairly small initialization vectors. Hence "pseudo." That bothers me: someday some bored graduate student sitting out the winter in Novosibirsk might figure out the deterministic part.

    Stirring pseudorandom cleverness into the mix, prudently done, doesn't hurt, but you should really have some physically random processes XORed in as well.

    193:

    Charlie: There's a huge difference between being able to build a 1Mt hydrogen bomb that weighs ten tons and a 1Mt hydrogen bomb that weighs 400 kilograms. The former can only be delivered by something the size of a B-52, an R-7 (Soyuz launcher), or a shipping container. The latter is something you can stick under the wings of a second-hand F-16 (paging Pakistan Air Force, Pakistan to the white courtesy phone) or on top of a Vega-class ELV (read: an I-can't-believe-this-isn't-a-Minuteman-III manufactured in Italy -- lest we forget, that Rocket Science stuff is 1950s technology).

    Nobody's horribly worried about Italy; post-WW2 they're about as non-interventionist as it comes in Europe and they are intimately tied in to the NATO and EU structures, etc. A weapons program based on a retired oil rig off the Kenyan coast is a bad Bond movie villain, not a menacing Italian nationalist plot...

    However...

    There are those in Asia slowly having seizures over the (decades-old) Japanese Lambda/Mu/J-1/Epsilon solid fuel space launcher series. The M-V / MX comparison caused some screaming fits in normally quiet private places, and Epsilon is a M-V derivative. J-1 (now retired) at least was evidently not easily MIRV/MRVable. CLPS on Epsilon added a certain degree of quiet screaming that way...

    194:

    scentofviolets wrote: Yes, information is information and encrypting an OTP still uses up bits; I thought that was implied in my comment. But Charlie says that using an OTP more than once is trivially breakable on low-entropy messages. Without knowing the details, the mere fact that this is so leads me to believe that, while breaking the crypto may be computer-intensive, it certainly won't be 'hard'. OTOH, OTPs have extremely high entropies -- the highest that is feasibly possible in fact. So I'm wondering what the flaw in this reasoning is, i.e., why you can't send more usable OTPs via the original OTP.

    If you use an unused section of pad A to send pad B, sure. OTP payloads are just as OTP encryptable as anything else.

    The problem is, if you re-use sections of A to encrypt B and transmit it.

    It's mathematically the same (as I understand it) if you use A to encrypt message 1, then reuse A to encrypt message 2 (which, as messages are low entropy, makes the pad easy to derive from the encrypted payload); and if you use A to encrypt 1, reuse A to encrypt B and transmit it, then use B to encrypt 2.

    A and B end up essentially information entangled, and if someone intercepts A-encrypted-1, A-encrypted-B, and B-encrypted-2 then you can equivalently reverse A (and B) out of that data.

    195:

    J Thomas wrote: If it scales down, a single 40 Kt bomb can ruin your whole day too.

    A single 40 Kt bomb can be a (lightly boosted) compact gun-type Uranium bomb; see the W33. 8 inch (20 cm) diameter, about 32 inch (80 cm) long, 110 kg / 240 pounds.

    That does NOT need a complex materials program to build. The entire design, engineering, and test program short of a live HEU yield shot can be done in a garage...

    196:

    Pseudo-random number generators are for people with an inexplicable fear of soldering wire. Pointing a geiger counter at the nearest wall works fine.
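    In the same spirit: the classic trick for turning a biased-but-independent physical bit source into unbiased bits is von Neumann's extractor. A Python sketch, with a simulated biased source standing in for the Geiger counter hardware:

        import random

        def sample_bit() -> int:
            """Stand-in for hardware, e.g. comparing successive intervals between
            Geiger clicks. Simulated here as a coin with a 70/30 bias."""
            return 1 if random.random() < 0.7 else 0

        def von_neumann_debias(nbits: int) -> list[int]:
            """Take bits in pairs: emit the first bit of '01' or '10', discard
            '00' and '11'. Removes bias from independent samples (not correlation)."""
            out = []
            while len(out) < nbits:
                a, b = sample_bit(), sample_bit()
                if a != b:
                    out.append(a)
            return out

        print(sum(von_neumann_debias(10000)) / 10000)  # ~0.5 despite the biased source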

    197:

    It's mathematically the same (as I understand it) if you use A to encrypt message 1, then reuse A to encrypt message 2 (which, as messages are low entropy, makes the pad easy to derive from the encrypted payload); and if you use A to encrypt 1, reuse A to encrypt B and transmit it, then use B to encrypt 2.

    This is the sort of thing that bugs me when my students do it: you've just rephrased my question, but in the form of an answer. If you really know why, post the answer; if you don't, posting stuff like this doesn't help at all (and lowers my opinion of you to boot).

    198:

    The tricky part is the gaseous diffusion to separate 235UF6 from 238UF6. That takes some serious space, since the two gases diffuse at very nearly the same rate and relatively small amounts of 238-U absorb neutrons and hinder the chain reaction.

    Gas centrifuges can do the job with a lower footprint, but are much more complex to make and to operate.

    (Most of that was addressed more to other readers on the thread than to George, who I'm fairly certain has a solid background on nuclear physics but not necessarily on isotope separation).

    199:

    For any N bits you encrypt with a one time pad, you use N bits from that one time pad. So sending one OTP via another OTP doesn't increase the absolute number of OTP bits available.

    200:

    Yeah, but: my point wrt. Italy was that the threshold for entry into the wonderful world of rocketry is no longer superpowerdom, or even being a front-rank second tier power like France/UK/Japan -- Italy did it, in a somewhat absent-minded manner as a combination science project and commercial venture.

    The Japanese Epsilon/CLPS stuff just makes me think that JAXA is an excuse to maintain in being the capacity to design and build a high-end ICBM capability in well under a decade if it became politically necessary to do so -- just like those countries that used to maintain shipyards with preposterously large cranes and steel mills with silly-scale forges, just in case they got a hankering to start installing 16" gun turrets in battleships.

    (They didn't actually want to go to the expense of building a fleet of BBs and then manning and drilling them and scaring the crap out of the neighbours so they'd get into an arms race, but neither did they want to risk getting left behind.)

    201:

    Pointing a geiger counter at the nearest wall works fine.

    In principle, yes. In practice, you want to make damned sure that there's nothing artificial contaminating your nice source of random noise before it reaches your counter. Like electrical noise being picked up by the coax cables ...

    202:

    That is not the question I asked.

    203:

    Sending new OTPs encrypted via an old OTP exposes you to the same attack as reusing a pad for plaintext.

    Suppose I (securely) send you a OTP A, and then use it to encrypt further OTPs B and C. In other words, I find A XOR B and A XOR C and send them insecurely, so they're known to any attacker.

    Now I use B and C to encrypt and send plaintext messages X and Y. So now the attacker has seen B XOR X and C XOR Y. They can now compute (A XOR B) XOR (A XOR C) XOR (B XOR X) XOR (C XOR Y) = X XOR Y. It's easy to recover two low-entropy plaintext messages given their XOR.
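    That algebra is easy to check mechanically; a short Python sketch with toy pads and equal-length toy messages:

        import os

        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        A, B, C = (os.urandom(16) for _ in range(3))      # pads; only A was couriered securely
        X, Y = b"meet me at noon!", b"bring the cash!!"   # low-entropy 16-byte plaintexts

        # Everything the attacker intercepts on the wire:
        intercepts = [xor(A, B), xor(A, C), xor(B, X), xor(C, Y)]

        combined = intercepts[0]
        for t in intercepts[1:]:
            combined = xor(combined, t)
        assert combined == xor(X, Y)  # all three pads cancel, leaving X XOR Y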

    204:

    So you're saying it's easy to recover X XOR Y if X and Y are encoded in, say, ASCII? Thanks, that's the missing piece of the puzzle.

    206:

    The secondary activation "problem" for fusion reactors is a red herring, basically, same as with fission reactors. For one thing the lithium breeder blanket will eat most of the free neutrons while converting their energy into heat to generate electricity with. The other thing is that the one scary activation product in a steel alloy is cobalt-60 and low or no-cobalt steels are a solved problem courtesy of the existing fission reactor manufacturing expertise developed over the past fifty years and more.

    There will be some activation of the vacuum vessel structure but that's dealt with by a variant of Safestor, letting the radioactivity decay to low levels over a few decades before the steel is repurposed. N-stamp hardware gets a raw deal in the reprocessing world compared to regular non-nuclear scrap which can actually be a lot more radioactive than material from a regulated nuclear site and still be acceptable for remelt and reuse. If I remember the numbers correctly, nuclear-sourced steel scrap in the US can't be more active than 500 Bq/kg whereas non-nuclear scrap steel can be 499,999 Bq/kg and still be acceptable for remelt and reuse. No real reason for this other than the fact that nuclear regulated materials are held to higher standards.

    207:

    In practice, you want to make damned sure that there's nothing artificial contaminating your nice source of random noise [for OTP] before it reaches your counter.

    Absolutely. Generating the OTP right is, not to be too obvious about it and shall we say, the key. You want to be as sure as possible that the physically random sources of bits are really random, and it wouldn't hurt to XOR in some hopefully cryptographically secure pseudorandom bits either.

    Fortunately for cryptographers and unfortunately for cryptanalysts, that XOR thing helps a bit. If you XOR several sequences of differently-derived bits together, the result won't be less random than the most random of them. (More or less, plus or minus not doing something stupid, but that's the idea.)
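    A sketch of that combining step in Python (the "hardware" stream is simulated here; in real use you'd read your debiased physical source instead):

        import os
        import secrets

        def hardware_bytes(n: int) -> bytes:
            """Hypothetical stand-in for a physical source (Geiger counter, diode noise)."""
            return os.urandom(n)  # replace with the real debiased hardware stream

        def combined_pad(n: int) -> bytes:
            """XOR independent streams: if the streams really are independent, the
            result is at least as unpredictable as the best of them, so a weak
            CSPRNG can't hurt a good physical source, and vice versa."""
            hw = hardware_bytes(n)
            prng = secrets.token_bytes(n)  # cryptographically secure PRNG stream
            return bytes(a ^ b for a, b in zip(hw, prng))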

    208:

    I think we can safely say that anyone who can scratch-build high-efficiency fusion bombs doesn't qualify for the "third world" epithet

    I think there's value in declassifying various data about hydrogen fusion. George Herbert pointed out that it takes great skill to get high-efficiency bombs using current open-source data. If they released more data would that get easier? Could it get so easy that high-efficiency fusion bombs were in reach for more nations?

    But: practical fusion? They don't inherently prevent someone capturing neutrons to breed 238U into 239Pu by blanketing the reactor, so they're not an intrinsic anti-proliferation tool. The reactor structure is subject to secondary activation, so they're not a magic bullet solution to the high level waste problem either.

    Since we don't know how to make a practical reactor, we don't know how the most practical reactor would work. The trouble with my above reasoning is that we keep learning more, and what we know looks bleak.

    Still, maybe some of what we think we know is in fact not so. I hate to bet my children's future on the chance that we're wrong, but we desperately need cheap energy and I don't see a lot of better bets....

    Here's one scenario that could make us wrong. I have no data that suggests it's true, I made it up out of nothing. But it could be true.

    Suppose that many decades ago when the US government was starting to realize that other nations were going to get the bomb, not just Russia, suppose that they started leaking and declassifying bomb data that had subtle errors. If you use that data in your bomb project you get fizzles and big delays. The nations that run successful bomb projects go back and do all the research from scratch, not trusting anything that's open source until they've tested it themselves.

    If that's true then George Herbert is wrong about the bright guys with open source, but there's no way to test it short of repeating experiments and looking for errors....

    Suppose some of that wrong data slipped into the assumptions the fusion reactor people have been going by. That would be bad.

    Is it worth setting the record straight? Is the possibility that we could get practical fusion reactors worth the probability that other nations could get efficient nukes more easily?

    209:

    lithium breeder blanket will eat most of the free neutrons while converting their energy into heat to generate electricity with.

    That's a good way to eat through large amounts of lithium pretty quickly, and it's not all that abundant. The neutron flux coming off a fusion reaction is huge.

    Decades ago Pons and Fleischmann reported cold fusion in quantities sufficient to heat a glass of water a few degrees. One of the first indications that they were wrong is that the neutron flux inexplicably failed to kill everyone in the building.

    A power plant would have to up the fusion rate by six to ten orders of magnitude; we're talking about quite a few neutrons.
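    The back-of-envelope arithmetic, with the branch ratio and claimed power as assumed round numbers:

        # D+D branches roughly 50/50 to T+p (4.03 MeV) and He3+n (3.27 MeV),
        # so call it ~3.65 MeV per reaction and one neutron every two reactions.
        EV_TO_J = 1.602e-19
        mean_energy_J = 3.65e6 * EV_TO_J   # average energy release per D-D fusion
        claimed_power_W = 1.0              # order of what Pons & Fleischmann reported

        reactions_per_s = claimed_power_W / mean_energy_J
        neutrons_per_s = 0.5 * reactions_per_s

        print(f"{reactions_per_s:.1e} reactions/s, {neutrons_per_s:.1e} neutrons/s")
        # ~1.7e12 reactions/s, ~8.6e11 neutrons/s: a fast-neutron source you could
        # not safely stand next to for long, let alone fail to detect.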

    210:

    scentofviolets: This is the sort of thing that bugs me when my students do it: you've just rephrased my question, but in the form of an answer. If you really know why, post the answer; if you don't, posting stuff like this doesn't help at all (and lowers my opinion of you to boot).

    "as I understand it" was in this case a shortcut for "I do not do crypto math myself, but have seen papers asserting..."

    vs say uranium enrichment ("I don't have a lab in my basement, but I can do the detail math/engineering/etc").

    It is relevant to the conversation that crypto geeks have asserted that any reuse of an OTP, including encrypting another OTP that then gets used later, exposes the original and second pads to deciphering. I just can't do the math to explain why in detail.

    211:

    Jay: The tricky part is the gaseous diffusion to separate 235UF6 from 238UF6. That takes some serious space, since the two gases diffuse at very nearly the same rate and relatively small amounts of 238-U absorb neutrons and hinder the chain reaction.

    Gas centrifuges can do the job with a lower footprint, but are much more complex to make and to operate.

    (Most of that was addressed more to other readers on the thread than to George, who I'm fairly certain has a solid background on nuclear physics but not necessarily on isotope separation).

    Nobody in their right mind would use gas diffusion anymore; it's as hard to get working as centrifuges and harder than the South African vortex method. We just hadn't thought of Zippe-type centrifuges or the gas vortex in WW2, or we'd have forgone gas diffusion and calutrons entirely. The capital investment and energy inputs per kilogram-separative work (aka SWU) of enrichment "effort" are far lower for modern processes. Even the most primitive centrifuges.

    The "right solution" for a proliferation depends entirely on whether their feedstock is natural uranium or LEU reactor fuel. If they make or buy LEU then the enrichment SWU to take that 2.7-3% LEO to 80, 90, 94, 97% is far lower than the SWU to make the LEU in the first place. Cheap primitive centrifuges to do that are a mid-sized R&D program and capital investment and low energy input. Gas vortex is about the same. Calutrons / EMIS equipment are low R&D low capital high energy input. So, do you have time, money, electricity, talent in what relative balances going in to the program.

    The balances are different if you're starting from scratch, from natural U rather than LEU. But it's a similar calculation.
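    The point about where the separative work goes can be made quantitative with the standard SWU value function (a Python sketch; the 0.25% tails assay is an assumed round number):

        from math import log

        def V(x: float) -> float:
            """Standard separative-work value function."""
            return (2 * x - 1) * log(x / (1 - x))

        def swu(P: float, xp: float, xf: float, xw: float = 0.0025) -> float:
            """kg-SWU to make P kg of product at assay xp from feed at xf, tails at xw."""
            F = P * (xp - xw) / (xf - xw)  # feed mass, from the material balance
            W = F - P                      # tails mass
            return P * V(xp) + W * V(xw) - F * V(xf)

        natural, leu, heu = 0.00711, 0.03, 0.90
        leu_feed = (heu - 0.0025) / (leu - 0.0025)  # kg of 3% LEU per kg of 90% HEU

        print(swu(leu_feed, leu, natural))  # ~125 SWU: natural uranium -> the LEU feed
        print(swu(1, heu, leu))             # ~84 SWU: that LEU -> 1 kg of 90% HEU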

    212:

    One of the first indications that they were wrong is that the neutron flux inexplicably failed to kill everyone in the building.

    Heh. Back at that time, my boss, enthused at the prospect of cold fusion, asked me to look into it. I did, and came up with the same answer: "Why aren't they dead?" It had to be cold, mostly aneutronic fusion, and that got into "You get one miracle, but not two" territory.

    213:

    Charlie: The Japanese Epsilon/CLPS stuff just makes me think that JAXA is an excuse to maintain in being the capacity to design and build a high-end ICBM capability in well under a decade if it became politically necessary to do so

    There's no "capacity to design" - slightly different interface structures and you can mount a bunch of reentry vehicles to CLPS, and it's an ICBM.

    M-V at least had no PBV. Sigh.

    214:

    The tricky part is the gaseous diffusion to separate 235UF6 from 238UF6. That takes some serious space, since the two gases diffuse at very nearly the same rate and relatively small amounts of 238-U absorb neutrons and hinder the chain reaction. Gas centrifuges can do the job with a lower footprint, but are much more complex to make and to operate.

    Nobody in their right mind would use gas diffusion anymore; it's as hard to get working as centrifuges and harder than the South African vortex method. We just hadn't thought of Zippe-type centrifuges or the gas vortex in WW2, or we'd have forgone gas diffusion and calutrons entirely.

    You beat me to it. Saying gas diffusion is simple compared to centrifuges is like saying a battery-only car is simple compared to a hybrid. Neither is very simple when your starting point of technology is a bicycle.

    A lot of nuclear tech used by the US in the 50s was achieved by throwing buckets of money at problems. Very large buckets.

    My father retired as the production manager of a GD plant. Simple and easy to build it would not be. And the power required to run it is serious.

    As to calutrons, did they ever get any useful amounts of U235 out of them? I thought they were just one of those "seemed like a good idea at the time..." items.

    215:

    David L: As to calutrons, did they ever get any useful amounts of U235 out of them? I thought they were just one of those "seemed like a good idea at the time..." items.

    I can't find my copy of John Coster-Mullen's book, where he ferreted out the specific quantities and sources for the Little Boy HEU, but generically it's widely reported that the Y-12 Beta Calutrons were the final enrichment step of "nearly all" of the HEU used in Little Boy.

    216:

    OK I'll bite, since this thread seems to be circling down into one of the usual attractors.... How does one concentrate U these days? I assume that making Pu is done by irradiation.

    Please - just point me at some (relatively) simple sources / explanations - don't go into a long screed here!

    217:

    In addition to the secrecy/deniability issue that Charlie points out, there's an awful lot of crypto deployed now that's, AFAIK (disclaimer: I am not a mathematician), resistant to fast factorization but weak against predictable RNGs. DSA signatures, OpenPGP messages encrypted to ElGamal keys, and EDH SSL ciphers come to mind... Not to mention the possibility that some target might wind up picking up a trojaned RNG and using it to generate one time pads.

    218:

    Greg:

    Really brief answer? Essentially all of it worldwide is large cascades of Zippe type centrifuges, combining spin with differential heating of the gas.

    Urenco, the European enrichment conglomerate, has some good public general FAQ documents. Or did.

    219:

    OTPs have extremely high entropies -- the highest that is feasibly possible in fact. So I'm wondering what the flaw in this reasoning is, i.e., why you can't send more usable OTPs via the original OTP.

    If you start with a truly random OTP, then theoretically there is no possible way to decrypt the message unless the decrypter has a copy of the OTP. If he gets a copy of the decrypted message then he can find the OTP and it does him no good whatsoever.

    As I understand it (I may be wrong) for this to work perfectly you need one bit of OTP for every bit of message. If you somehow use one bit of OTP for more than one bit of message then there's something for decryption to work on.

    So if you use an OTP to transmit a second OTP, at one bit per bit you don't gain anything. If you somehow use one random bit to encrypt more than one random bit then you no longer have an OTP, you have two sequences you can use for cryptography which are no longer theoretically secure.

    I don't claim that it's easy or likely to extract messages that were created by an almost-OTP. Probably it's very very hard. But it's an important theoretical distinction.

    Like, what's the chance a woman is carrying an STD? "I don't think so" versus "I'm a virgin".

    220:

    A single 40 Kt bomb can be a (lightly boosted) compact gun-type Uranium bomb ....

    That does NOT need a complex materials program to build. The entire design, engineering, and test program short of a live HEU yield shot can be done in a garage...

    That's something else I'd appreciate hearing about from an expert.

    The impression I have is that everybody who builds bombs does it basically the way we did. First they enrich U235, and maybe they test a HEU bomb. Then they build reactors to make plutonium, and learn how to build plutonium bombs.

    And I have the impression that the latter takes more expertise, but it's far far cheaper once the fixed costs are handled. You use a lot of energy to separate U235, but the reactor that makes plutonium can give you energy back. Once you know how, you can make a lot of plutonium bombs as cheap as one HEU bomb.

    They could skip the HEU step entirely and put all their LEU into reactors to make plutonium, but for external reasons they usually want a bomb quick. And they enrich uranium because that's the only way they can get a military reactor to extract plutonium from -- civilian reactors are watched.

    Is there any way to skip the U235 step entirely? Given a cheap neutron source, couldn't you start with DU and make plutonium that could be chemically extracted, and then build a reactor and Bob's your uncle? Similarly with thorium.

    You strike a match to light the tinder which lights the kindling which lights the twigs....

    221:

    Like, what's the chance a woman is carrying an STD? "I don't think so" versus "I'm a virgin".

    A rather fragile simile, I'm afraid. Check out maternal transmission of various STDs. (And yes, birth is an intrinsic part of sex, if a rather delayed part.)

    222:

    J Thomas in part wrote: The impression I have is that everybody who builds bombs does it basically the way we did. First they enrich U235, and maybe they test a HEU bomb. Then they build reactors to make plutonium, and learn how to build plutonium bombs.

    The US did both at once because we didn't know going in which would work.

    US, USSR, UK, CN, FR, IN, IS, NK programs fired Pu bombs first and for the most part exclusively (US, UK, CN, known exceptions, NK likely).

    PK and SA were HEU first, along with 1990 Iraqi aborted program and 2003 Iran paused program first design.

    And I have the impression that the latter takes more expertise, but it's far far cheaper once the fixed costs are handled. You use a lot of energy to separate U235, but the reactor that makes plutonium can give you energy back. Once you know how, you can make a lot of plutonium bombs as cheap as one HEU bomb.

    You're neglecting the Pu separation (massive nasty chemical plant operating with vicious acids and related nasties, completely by remote control as everything involved is lethally radioactive...).

    A complete accounting of costs is hard; both paths are very expensive. $100,000/lb for weapons grade Pu, roughly, and some large fraction of that for WG HEU.

    You need 'roughly' four times more HEU at equivalent tech levels for a basic bomb. So Pu weapons at advanced tech can get smaller, excluding gun type weapons where HEU quantities go up another factor of four-ish, but overall weapon sizes can go down again...

    If you talk to five experts you get eight opinions on the most efficient program path, none of which they will talk about in detail in public on how to optimise... including this one.

    223:

    J Thomas: Like, what's the chance a woman is carrying an STD? "I don't think so" versus "I'm a virgin".

    This is your yellow card -- cause: using a really offensively sexist metaphor. (Compounding factor: you didn't need to.) Please do not do it again or you may be unpublished and/or banned.

    224:

    "This is your yellow card"

    I apologize. It seemed a vivid metaphor which was exactly homologous, and I didn't think about the problem.

    I will make my best effort not to do anything similar again.

    225:

    Thanks. I'm trying to keep these discussions as inclusive and welcoming as possible (while retaining the ability to engage in open debate); that kind of thing plays into a rather nasty set of cultural tropes that create an unpleasant environment for women.

    226:

    The discussion of various isotope enrichment methods is interesting, but for practical purposes the conclusions seem to be the same. Making or getting adequate amounts of adequately-enriched fissionable material is 96% of the challenge of building a small nuke; fashioning the material into a suitable form is pretty challenging for a garage sized operation (even if they're OK with being expendable). The trigger is its own little challenge. Once you have the right nuclear materials in the right shape, you're 99% of the way there.

    227:

    Returning to Internet security etc. ... just how big a mess is today's EU court ruling going to create for people/users in general, not just Google? Who's going to be the most vulnerable? How will it impact online crime?

    "Europe’s highest court on Tuesday stunned the U.S. tech industry by recognizing an expansive right to privacy that allows citizens to demand that Google delete links to embarrassing personal information — even if it’s true.

    The ruling has potentially wide-ranging consequences for an industry that reaps billions of dollars in profit by collecting, sorting and redistributing data touching on the lives of people worldwide. That includes more than 500 million people in the European Union who now could unleash a flood of deletion requests that Google would have little choice but to fulfill, no matter how cumbersome."

    228:

    The Google ruling plays complexly into all sorts of things including the US NSA overreach.

    These things are going to start tweaking users' experiences in the not too distant future, and it will be noticed.

    229:

    That includes more than 500 million people in the European Union who now could unleash a flood of deletion requests that Google would have little choice but to fulfill, no matter how cumbersome.

    I'm pretty sure that's just not going to happen. Google might make a token, spend-six-weeks-on-hold sort of effort in that direction, but I just can't imagine them dedicating the resources to fully comply with that law. They'd probably rather leave Europe, if those were the only two choices.

    230:

    The tech industry must be wilfully blind not to have seen this coming.

    The US Constitution doesn't provide a right to privacy -- there's a rather tenuous inferred right -- but the European Convention on Human Rights is as solid on a formal right to privacy and family life as it is on the right to free speech. Both rights are to some extent a reaction to the excesses of Mean Mr. Moustache and various other dictators during the pre-EU 20th century, just as much of the US constitution is a response to the impact of the 18th century Zero'th World War on the North American Colonies. And just as the US constitution predates things like national-level armies and so has oddities like the 2nd Amendment, so the ECHR predates the internet and makes pesky annoying assumptions about the cost of information-copying being non-zero.

    Getting rid of the European privacy regs will be about as easy for the US tech industry as getting rid of the US gun lobby, and for much the same reason (privacy is seen in the EU as a self-defense issue, much as gun ownership is in the USA).

    In fact, I'd argue that the right to privacy is much more important than the right to own firearms, at least in a century that's going to be dominated by urbanization, ubiquitous computing, cheap-as-dirt sousveillance technology, and killer drones.

    There's probably an essay in this, but I'm too tired to write a new High Concept blog entry right now.

    231:

    In the first instance, enforcement will require a court order.

    But if Google flout it, they're going to be in a world of hurt. "They'd probably rather leave Europe" is about as credible a response as "they'd probably leave the USA" -- it's a similar-sized market.

    232:

    That Italy can do something doesn't mean that it is easy to do. According to the nominal GDP tables on Wikipedia, Italy has the world's ninth largest economy, below Russia and above India, with about 3% of world GDP. It's a member of the G7.

    233:

    May be a business opportunity in this ...

    An app that searches for what-should-have-been-deleted personal info and, upon finding it, automatically files a suit against that site/server.

    234:

    This is about a company that refuses to admit it's an ad agency? Don Draper in chinos?

    I actually look forward to the EU hitting GG with sticks. In roughly the same way MS had to listen to those sticks...

    235:

    By "leaving Europe" I don't mean not advertising to Europeans, I mean not keeping corporate assets where European courts can seize them. Keeping the servers in Switzerland or Norway ought to do the trick.

    American tech firms are likely to consider the European privacy rulings about as sensible as Saudi porn laws, and respond in much the same manner- by avoiding the jurisdiction.

    236:

    OK, now I'm imagining a cyberpunk Casablanca.

    "Of all the servers in all the world, she had to log in to mine."

    237:

    Er... no. The European Convention on Human Rights =/= the European Union; it is a separate organisation, to which Norway and (I presume) Switzerland are bound. This means a privacy declaration by the European Court of Human Rights is binding on them. If this is in fact an EU ruling, Norway is a member of the European Economic Area and may be bound by the decision, and Switzerland is landlocked by the EU. Is Google worth a trade war with the EU?

    238:

    Ah, US corporations trying to ignore "European" regulations. IIRC, one of the later chips in the line descended from the 8086 -- the original Pentium -- had a floating-point division error baked in (the FDIV bug), which gave wrong answers to fairly simple calculations.
    This was discovered, and Intel responded by saying, effectively: "Who cares - these are not for math(ematics), they are for PC use." Until the UK's consumer legislation cut in ... as the goods were "not of merchantable quality". Intel still tried to laugh it off (fine maximum of about £500, IIRC) UNTIL they realised that this could be replicated in every single local authority across the UK - plus the bad publicity, of course.
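
    (For the curious, and assuming this is indeed the Pentium FDIV saga, here is a quick C check using the classic published test operands; on an affected chip the division came out around 1.333739 instead of 1.333820.)

    #include <stdio.h>

    int main(void)
    {
        double x = 4195835.0, y = 3145727.0;

        /* Correct FPUs print 1.333820; flawed Pentiums printed ~1.333739. */
        printf("x/y = %.6f\n", x / y);

        /* Equivalent check: this should be ~0; flawed parts gave ~256. */
        printf("x - (x/y)*y = %.6f\n", x - (x / y) * y);
        return 0;
    }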

    239:

    The European Convention on Human Rights =/= European Union, it is a seperate organisation to which Norway and I presume Switzerland are bound.

    Yes, Switzerland joined in 1974. 48 nations have signed it; the easternmost of them is Russia.

    I expect the Russian government would ignore it, in exchange for certain considerations that Google could provide for them.

    240:

    Ah, US corporations trying to ignore "European" regulations.

    I refer everyone to the earlier comment of mine (last week or 2) about how "blaming Canada and Europe for the failure of Prohibition" was effectively complaining that no-one else felt the need to enforce USian domestic laws on their territory.

    241:

    By "leaving Europe" I don't mean not advertising to Europeans, I mean not keeping corporate assets where European courts can seize them. Keeping the servers in Switzerland or Norway ought to do the trick.

    Google is an advertising company; that's where its revenue stream comes from, notwithstanding the various "moon shot" programs such as self-driving cars (which promise to deliver a different, equally huge revenue stream at some future point if they pay off, allowing G to pivot like a start-up if necessary).

    Ad companies have to sell services to customers. If those customers are in the EU, they're governed by EU regulations. If the services Google is trying to sell include customized search and tracking of de-anonymized persons, then it could be argued in a court that companies paying for these personalized ads are party to any crimes committed in the process of gathering that data. So Google's EU based customers could be exposed to lawsuits ...

    There are a bunch of other angles, but I'm tired and I've got a book to write. Ultimate point: Google might be able to reduce its attack surface for litigation in the EU, but by so doing it would also reduce its ability to do business with EU customers. Which would ultimately cost it a lot more money than abiding by the law. Unless you're suggesting that Google, like the Mafia, is a business that operates without regard for regulations because in its sector it's cheaper to handle enforcement itself?

    242:

    The thing about hydrogen fusion is that there are something like four or five possible reactions. Deuterium-tritium is by far the easiest to trigger, and it has the highest energy yield. However, most of the energy comes out in gamma rays (very-high-energy X-rays), which are useless for power generation.

    This makes deuterium-tritium fusion primarily useful only for weapons, not power. As such, this is data that you really don't want the Bad Guys to get.

    About ten years ago, during my second student period (refresher work while jobhunting), I attended a physics club talk. The professor was doing fusion research. On deuterium-tritium fusion. After the formal talk was over, I asked him why he was looking at D-T, because most of the energy output was in gammas. I did not get an answer. I concluded, to myself, that he was really doing weapons research.

    That kind of thing is actually quite common, and the results openly published. Read the Afterword to Tom Clancy's "The Sum Of All Fears", where Clancy describes how to find some of it.

    243:

    Google isn't really an advertising company as such. Advertising works by creating or identifying an insecurity, and then presenting a good or service as something which will reduce that insecurity.

    Google works by reducing internet-related insecurity through ostensibly free services for individuals and paying for those by charging corporates -- which have different insecurities -- for some of the insecurity-reduction services. It's just that the primary corporate insecurity is "no one will buy our stuff".

    "Get more people using the internet" as a business model, in other words.

    This ruling looks like it entirely breaks the reduces-insecurity-of-individuals model: if it's easy enough to comply to make the European courts happy, whole swathes of "I know nothing about this person" insecurity reduction break, because now Google won't be able to tell you anything the individual finds damaging. That could pretty readily kill them. Which is, from the point of view of the established political process everywhere, a good and excellent thing.

    244:

    if it's easy enough to comply to make the European courts happy, whole swathes of "I know nothing about this person" insecurity reduction break, because now Google won't be able to tell you anything the individual finds damaging

    Au contraire: knowing that stuff you want to disown that you said decades ago can be left in the silence of the grave is going to have the opposite effect for those who use it: it's a clear insecurity-reduction measure. We don't like being in a panopticon. (Being the observer in the turret is all very well, but having to live in those cells really sucks.)

    245:

    Why are high-energy gammas useless for generating power? Once they've been "spent" in an absorbing material they're heat and thus steam and therefore electricity. A good proportion of fission energy is produced by the emission of gammas, one-tenth the energy of D-T fusion gammas but we've got a demonstrable track record of gigawatt-level emissions over many decades in hundreds of existing nuclear reactors to show that properly-designed hardware can easily withstand that sort of bombardment while harvesting the energy as heat to drive turbines.

    246:

    Unless you're suggesting that Google, like the Mafia, is a business that operates without regard for regulations because in its sector it's cheaper to handle enforcement itself?

    Sort of like the issues of copyright and YouTube?

    247:

    I don't think that's quite true, about "the US constitution predates things like national-level armies". Although they didn't use the Age of Revolutions, Nation in Arms, mass conscription model, the northern powers - Denmark, Sweden, and (in response to those) Russia - had already developed national level armies that we would recognise. Arguably, the Ottomans and the Mamelukes had done so earlier, though we would most likely disqualify those as not being nations (even though they operated on that scale), and of course a large part of the driver for early Chinese Legalism was to enable warring states to do a Prussia, so to speak, and that idea was retained and sometimes even expressed after China was unified.

    248:

    Google's whole business is about automation. They don't do things manually or have an editorial staff checking websites. If they're not allowed to serve websites that mention people who don't want to be mentioned, and they don't have an algorithm to figure out who wants to be mentioned where*, then they're not allowed to serve websites that mention people at all. And that's most websites.

    If jurisdiction shopping won't solve the problem, ceasing business in Europe starts to look reasonable.

    *For extra fun, try devising an algorithm to tell the difference between real people, whose privacy must be respected, and fictional people, who do not have privacy rights. Remember that some real people have the same name as fictional people; there are real humans named "Ronald McDonald" and "Homer Simpson" whose privacy must be respected.

    As far as the legalities go, remember that Google owns YouTube, which contains a significant proportion of the copyright violations on the planet. An unflinching regard for the spirit of the law isn't really their approach to business.

    249:

    It is arguable that all state armies post-Westphalia were, in Europe, national armies. 18th-century armies were small, expensive and well trained; non-states did not have the resources to field good armies. The French revolutionary/Napoleonic armies relied on mass forces backed by conscription to overwhelm the existing armies: Napoleonic forces were poorly trained, badly resourced and cheap mass forces. (Being very simplistic!) Incidentally, does the description of 18th-century armies remind anyone of anything?

    250:

    This makes deuterium-tritium fusion primarily useful only for weapons, not power.

    I believe that statement belongs to the epistemological category "Not entirely correct." (Leaving aside the question of whether fusion will ever be a useful terrestrial power source.)

    https://www.iter.org/sci/fusionfuels

    "Although different isotopes of light elements can be paired to achieve fusion, the deuterium-tritium (D-T) reaction has been identified as the most efficient for fusion devices. ITER and the future demonstration power plant DEMO will use this combination of elements to fuel the fusion reaction."

    251:

    It should also be noted that one reason for state armies post-Westphalia was the (unspoken) conviction of "never again" after the Thirty Years War, in which uncontrolled religious forces caused massive destruction. The next European religious war came nearly 300 years later (1941, the Eastern Front).

    252:

    Deuterium-tritium is by far the easiest to trigger, and it has the highest energy yield. However, most of the energy comes out in gamma rays (very-high-energy X-rays), which are useless for power generation.

    This makes deuterium-tritium fusion primarily useful only for weapons, not power.

    If it's the easiest to trigger, then it kind of makes sense to do that one first. The others are harder to do, so use what you learn from that for the next one.

    But you have a point, this is the data the bombmakers most want.

    253:

    The thing about a panopticon is that it's relatively flat; everybody's in there.

    What this ruling appears to create is an administered panopticon; the costs and effort involved to find all the links you want suppressed aren't trivial. (It's not obvious that Google will be able to respond to suppression requests in a useful way at all; it's not clear a human being will be able to distinguish "which John Smith is this being talked about?" reliably.) So it becomes two- or three-tier, especially since facts are no defence and the right of privacy has very squishy edges.

    So I'm expecting established powerful people will have nothing bad said about them on the internet, that there will be a middle layer of people who it is difficult to disparage, and the rest of us. This really doesn't benefit the general polis since those established powerful people are right now vulnerable only to amateur journalism, which this ruling looks more or less designed to destroy.

    254:

    Let's turn the question around. What do you think Google should do when somebody complains that Google is invading their privacy? Before answering, remember that:

    1) The solution has to be algorithmic, because editing umpteen million webpages, many of which update frequently, with human labor is not practical.

    2) There are many humans named Bob Howard, and several of them are likely to make this complaint.

    3) As long as we're at it, you probably aren't the only Charles Stross in the world.

    255:

    Google's whole business is about automation. They don't do things manually or have an editorial staff checking websites. If they're not allowed to serve websites that mention people who don't want to be mentioned ... then they're not allowed to serve websites that mention people at all. And that's most websites.

    Well boo-hoo. Sucks to be them.

    Just because a business model is possible it does not follow that it is legal (or, for that matter, desirable).

    256:

    I was hoping someone would pick up on this...

    Until everyone gets one and only one internet user ID for their entire lifetime, Google or anyone else cannot accomplish this -- that is, zero-error suppression/deletion of past history.

    Every time a universal ID# has been suggested where I live, people go insane: No way! Absolutely not! You're reducing people into numbers, etc.

    257:

    In fact, I'd argue that the right to privacy is much more important than the right to own firearms, at least in a century that's going to be dominated by urbanization, ubiquitous computing, cheap-as-dirt sousveillance technology, and killer drones.

    I understand, at some kind of intellectual but not emotional level, how Europe views this. But I get the feeling that the court thinks Google operates the way Yahoo tried to "back in the day". Yahoo's early business model had people building their indexes: if you weren't a "big deal" and weren't in their index, you could apply to be included, and (I think) apply for which categories you wanted to appear in. This all fell apart as the Internet grew. Some of us saw this coming as soon as they started but ...

    Anyway, Google's business model falls apart when they have to start manually adjusting their index. And maybe this was the point. Maybe many Europeans, including the court, feel that any index like Google's should be manually curated.

    As to how to do this: that really gets tough. If you just use my first and last name there are thousands of me in the US alone, maybe tens of thousands; I went to high school with one. Adding in my middle name still gives you at least a few hundred. How does someone building an index figure out which one is "me"?

    And what I find strange, and a bit suspicious, is that the original source of the information doesn't have to take it down, just the indexer such as Google. Does this mean the court feels no one should be indexing the Internet? Or did I get this detail wrong?

    258:

    All those saying that Google can't algorithmically cope with this, or will have a meltdown in one form or another, I advance a counter-argument:

    IF $query is a name:
        IF ($webpage in $results is a newspaper article) AND ($webpage is over X years old)
        THEN remove $webpage from $results

    This will capture 90-99% of all possible infringements, at which point a manual process should be possible (if necessary). This is less than ideal, but if someone as semi-competent as I am can solve this in 5 minutes, what can Google come up with?
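
    (Fleshing that out as a minimal C sketch -- every identifier here is hypothetical, and the toy looks_like_name() stands in for exactly the hard part the following comments poke at.)

    #include <ctype.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy stand-in for the genuinely hard problem: does the query look
       like a person's name?  Here: two capitalised words. */
    static bool looks_like_name(const char *q)
    {
        const char *space = strchr(q, ' ');
        return space && isupper((unsigned char)q[0])
                     && isupper((unsigned char)space[1]);
    }

    struct result {
        const char *url;
        int  age_years;   /* years since first spidered */
        bool is_news;     /* heuristic flag set at index time */
        bool suppressed;
    };

    static void filter(const char *query, struct result *r, size_t n, int cutoff)
    {
        if (!looks_like_name(query))
            return;
        for (size_t i = 0; i < n; i++)
            if (r[i].is_news && r[i].age_years > cutoff)
                r[i].suppressed = true;   /* drop from the returned page */
    }

    int main(void)
    {
        struct result r[] = {
            { "gazette.example/fire-1998", 16, true,  false },
            { "blog.example/recent",        1, false, false },
        };
        filter("John Smith", r, 2, 10);
        for (size_t i = 0; i < 2; i++)
            printf("%s -> %s\n", r[i].url, r[i].suppressed ? "suppressed" : "shown");
        return 0;
    }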

    259:

    What's a name? Are you going to give it a list of human names? Expect complaints from both false positives and false negatives.

    How does a web crawler know a "newspaper article" when it meets one? I don't think there's a standard HTML code for that. You could get better than 50% accuracy by tagging anything on a news organization's website as probably a news article, but that just moves around the identification problem. Will there be a master list of newspapers and their websites?

    Note also that many web pages aren't dated in ways that a computer program will recognize.

    260:

    Seriously? Are you a programmer who's worked on complicated projects?

    The way a good programmer knows it's going to be a bad day/week/month is when someone in authority starts off with "All you have to do is ....."

    Do you have any idea of how many name collisions there are in the US? Especially with Hispanic names. (Not racist: a product I worked on in the 80s ran into this, where we had people scouring phone books in cities like Miami to make sure we hadn't made some bad assumptions about how many people had the "same" name.)

    I've got to feel there are similar situations with names in Europe.

    Now let us also start talking about Wikipedia. And blogs. And ...

    261:

    As I understand it the calutrons were energy intensive mostly because of their big electromagnets, though the ion sources also required a fair bit of power.

    I assumed that during the Manhattan Project they used electromagnets instead of permanent magnets for calutrons because it was a lot easier to achieve the desired field strength, homogeneity, and stability with resistive electromagnets, rather like how older low field NMR spectrometers used good-sized resistive electromagnets. But of course people have now demonstrated that you can build a good low-field NMR device with modern permanent magnets at much lower capital and operating costs (see "benchtop NMR," "permanent magnet NMR"). The two big keys seem to be higher field strength from samarium cobalt magnets and good temperature control to prevent field strength variation.

    Given the advances in electromagnetic modeling, permanent magnet materials, and closed loop control systems since 1942, I wonder if it would be practical to build a permanent magnet calutron today, and if so how its costs per separative work unit might compare with other technologies. Could a calutron also be used to upgrade plutonium from civil waste fuel into weapons-grade Pu-239? There's a smaller mass difference between Pu-239 and 240 than between U-235 and 238, but the total enrichment needed is much lower than going from natural uranium to HEU. The radiation body burden is going to be a lot higher for whoever's assigned to scrape out the calutrons in the plutonium version, but maybe they will have good protective suits. Or not care.

    I also wonder if pursuing a modernized calutron, which requires more research trailblazing but perhaps hits fewer materials limits or technology controls than e.g. centrifuges, would be attractive to a would-be proliferator with some resources but not great international ties. Say Myanmar or Uzbekistan.

    262:

    Legal?

    Business Models of Law just sit on top of all other models of LAW.

    I don't know about "legal" in this context, or in any context that implies international consensus in business or any other sort of LAW.

    Given the difficulty implicit in integrating International Human Rights Law with, say, the death penalty in Texas in the US of A, or the legal status of women in Somalia ..

    " .. A Sudanese court has sentenced a woman to hang for apostasy - the abandonment of her religious faith - after she married a Christian man.

    Amnesty International condemned the sentence, handed down by a judge in Khartoum, as "appalling and abhorrent"...

    " Legal media report the sentence on the woman, who is pregnant, would not be carried out for two years after she had given birth.

    Sudan has a majority Muslim population, which is governed by Islamic law.

    "We gave you three days to recant but you insist on not returning to Islam. I sentence you to be hanged to death," the judge told the woman, AFP reports."

    Business law should be relatively simple compared with that... until you start trying to define 'Business' and 'Law'.

    News media often quote desperate appeals for intervention in the latest murderous crisis by "the international community"... from Wikipedia:

    “The international community is a phrase used in international relations to refer to a broad group of peoples and governments of the world. The term is typically used to imply the existence of a common point of view, towards such matters as specific issues of human rights. Activists, politicians and commentators often use the term in calling for action to be taken, e.g. against what in their opinion is political repression in a target country.

    The term is commonly used to imply unanimous international support for a point of view on a disputed issue, e.g. to enhance the credibility of a majority vote in the United Nations General Assembly. Used in this way, it is a weasel phrase.

    "Noam Chomsky has noted the use of the term to refer to the United States and its client states and allies in the media of those states.[1][2][3] These states are the most common targets of condemnation by the actual international community."

    The reality is that such appeals to the international community have about as much chance of being fulfilled as desperate prayers to the Deity of Your Choice -- even if your D of Y C includes International Socialism or Democracy.

    Just lately, active low-key military intervention in the "#BringBackOurGirls" movement by the "international community" seems to be limited to those political blocs who have an interest in exploiting African resources. Sending in SAS advisors - or whatever nationality's equivalents - appears to have become the standard knee-jerk response, the modern equivalent of the good old-fashioned 'send in a gunboat'.

    Public relations rules, and the powers that be have been ramping up the effective power of Special Forces since the failures of Shock and Awe tactics in our most recent wars.

    The trouble with such S & A tactics is that the entire civilised world has been very hard to shock since the Second World War... extermination camps on an industrial scale and atomic weapons don't really leave much room for shock, do they?

    I suspect that, thanks to universal literacy and its daughter the social media, we are now un-shockable.

    Oh, we may well pose as being shocked by the latest atrocity in the land of far, far away, or the latest revelations about the latest corrupt business model, but I do wonder whether we are capable of genuine shock any more.

    The Snowden revelations haven't really shocked or surprised us, have they? We are a little surprised that Snowden should have made the revelations, and that our very expensive intelligence systems should be vulnerable to the actions of a very minor player in the intelligence community, but the revelations themselves? Nah... not very interesting... move along to the next event.

    I almost wish that the whole Snowden Event would turn out to be a massive disinformation ploy that covers something genuinely shocking.

    Where are the Many Angled Ones when you need them?

    263:

    As an exercise, I tried Googling myself using this name, which should be fairly unique. Even on page 1, I got an obsolescent hit (a site I've not visited in years and have no intention of visiting again, ever), 3 I don't recognise, and one that's a transliteration of a dog-walking service. I got a reported "about 10,400 hits".

    Using my real name, I got another 9 million plus, including 17 from LinkedIn, a different hit reporting 25 from LinkedIn, 6 sample images (all of other people), 107 from 192.com (I don't have a home landline), an inaccurate claim that I'm on Facebook, 2 for a USian MD, and a claim that I own $real_name.com. Most of these are also wrong (but I may have an entry on LinkedIn).

    Does anyone still think that this is easy, or even possible, for search engines to sort out algorithmically (or even at all)?

    Perhaps some of the others who use a single pseudonym would try the same exercise and make a similar report?

    264:

    Just because a business model is possible it does not follow that it is legal (or, for that matter, desirable).

    Thing is, the law as interpreted breaks search. It sets a standard you couldn't meet short of strong AI.

    The internet is flat useless without search; the fallback is curated walled gardens run by wealth agendas.

    I have a lot of trouble seeing that as a net win. (If privacy is a concern, you have to go after collect-and-retain, not find-what-exists. NO ONE is willing to even start to go after collect-and-retain. Which suggests something about how much it's actually wanted.)

    265:

    "Well boo-hoo. Sucks to be them.

    Just because a business model is possible it does not follow that it is legal (or, for that matter, desirable)."

    Seems the bigger problem is that "right to be forgotten". I'm really not clear on how that is supposed to work.

    Is it really illegal to tell someone that, for example, Person X was sued for something ten years ago, and here's a link to a newspaper article about it? Why should providing that (true and public) info be prohibited - even if Person X was acquitted, won their case, whatever?

    Keeping juvenile records under seal, or expunging an overturned conviction from the public records, sure, that's one thing. But prohibiting reference to news reports from the past ... huh?

    As one example: Berlusconi. Surely we don't want any of the years of highly public legal excitement there to be censored, in the interests of Silvio's privacy?

    There should be privacy protections, sure, but this one sounds quite mad. Am I missing something here?

    266:

    Perhaps some of the others who use a single pseudoneum would try the same exercise and make a similar report?

    Searching for my legal name is an exercise in pointlessness. This wasn't necessarily the case in the '90s, when usenet news ruled and the net was much smaller; then I was one of only three or four notable people with my name. Now? Nope.

    So I tried googling a long used pseudonym in connection with a niche fandom. I got lots of entries for a hotel chain in another country, pictures of airplanes, references to a miniatures wargame I don't play, articles about a traffic accident, and buried in all this was a website which hadn't been updated since 2001; upon that site, with digging, I found a picture of myself from 1997.

    I don't feel my privacy is desperately endangered.

    267:

    That sounds like a variation on the same sorts of issues that I hit, and my real name isn't that commonplace.

    268:

    Just because a business model is possible it does not follow that it is legal

    Fair enough, but if the EU outlaws Google's business model, then Google can't do business in Europe. Nor can Bing or Baidu or Ask.com for similar reasons. Either the law gets worked around, or search engines get sued into oblivion, or search engines systematically block EU IP addresses (and boo hoo for every European web company that depends on them).

    I'm not suggesting that the EU lacks the authority to criminalize the internet within its borders, or to do something tantamount (like this). I'm just finding it odd that anyone would consider it a good idea.

    269:

    So I'm expecting established powerful people will have nothing bad said about them on the internet

    Precisely. There's (another) stink today about how the Chilcot Report - Google for it - is delayed YET AGAIN, "and will we ever see it?" My opinion is that unless someone leaks a copy: NO. Why not? Because it will indict the vile christian, Blair, as a War Criminal, that's why. And we can't have that, can we?

    270:

    Given what the Mahdi's present-day followers are doing ..... Well, that sort of thing makes one wonder if Kitchener, even given the criticisms of brutality levelled against him by Churchill (who was there), wasn't at least partially correct after all. Yeah, I know - two wrongs don't ...... And we are supposed to be the "good guys" - also see my immediately preceding comment about Tony B. Liar.

    271:

    Oops, addendum: I suspect that, thanks to universal literacy and its daughter the social media, we are now un-shockable. Wrong. We are supposed to have learnt; we are supposed to have "improved". Then Putin starts using the Sudetenland playbook, and various Dark-Age and Bronze-Age primitive religions come crawling out of the dustbin to disturb our rationality and peace, etc. ad nauseam. Yes, it is profoundly shocking. Now, what does one actually do about it?

    272:

    In the long run, being able to search for stuff is too valuable. We won't be willing to lose that for privacy.

    And so anything that people know you want to keep secret becomes something you can be blackmailed about. You do better not to let people see that you care about those secrets; let them loose before they turn dangerous.

    I'm pretty sure it will turn out that way. Unless you can prevent search engines from existing, they will be used to organize information. If you make it illegal to use that information, they will be used only secretly by people who think they won't be caught. And by governments.

    People with normal incomes are better off when that information is free or cheap, because then they are at less of a disadvantage compared to big entities and also to people who can use the big entities' google searches for their own ends.

    273:

    I'd like to suggest that suppressing search results isn't the only way that Google (or any other search operation) could comply with the requirements of "right to be forgotten" - since they're the ones with the massive search and indexing capability they're well placed to act as an intermediary and have erroneous/obsolete content removed by the publisher...

    274:

    We know Google can do this - they already do it for DMCA requests.

    275:
    What's a name? Are you going to give it a list of human names? Expect complaints from both false positives and false negatives.

    Amend that to "list of names we've had requests to remove results for," then. Google has previously shown it thinks it knows what names are, but demonstrated it was wrong.

    How does a web crawler know a "newspaper article" when it meets one? I don't think there's a standard HTML code for that. You could get better than 50% accuracy by tagging anything on a news organization's website as probably a news article, but that just moves around the identification problem. Will there be a master list of newspapers and their websites?

    Things like microformats help. Google has a bunch of heuristics for what a news article is already.

    Note also that many web pages aren't dated in ways that a computer program will recognize.
    "when did we first spider it? That's its start date."

    You're critiquing a reductio ad absurdum for those who said complying is impossible, not a serious suggestion.

    276:

    Exactly how are name collisions supposed to be relevant?

    277:

    "Hi Google. My name is Barack Obama. No, not that one. Yes, I want you to remove all references to me."

    I'd have thought that would be relevant, that it would have some bearing on both false positives and false negatives. No?

    278:

    We know Google can do this - they already do it for DMCA requests.

    It's not the same. AIUI the court said Google has to remove links to originals. But the poster of the original doesn't have to remove true information.

    Or did I miss something?

    279:

    Given that, what is the proposal for making Google links to, say, the "Anytown Gazette" article on "how my house burnt down" stay removed, given that the AG's original article may remain in perpetuity?

    280:

    "Sure, just send us the offending links. All of them." It's data protection law. You can only have it removed when it's inaccurate or out of date, so you have to tell them the exact problematic page.

    281:

    You've never gotten a "chillingeffects.org" notice in your results on a Google search?

    Google can and do remove links to copyright infringing material from their results. They post a link to the DMCA request instead. Here's an example.

    282:

    Well, I'd never seen one of those before, but I don't usually make searches for music or video content.

    283:

    "Perhaps some of the others who use a single pseudoneum would try the same exercise and make a similar report?"

    I have a fairly stable 'nym and ego-surfing it gets pointers to various aspects of my online existence for the first page and a half or so of google results.

    It's much the same for my actual, for-realsies, name - but I have an unusual family name paired with a fairly unusual first name (less so these days, but it was unusual for someone born pre-Star Wars) so I am very much an outlier when it comes to namespace collisions.

    Regards Luke

    284:

    We know Google can do this - they already do it for DMCA requests.

    From Google's perspective, there's a huge difference between "Don't include the website at this specific address" and "There are 500,000 humans named Wang Chen. Don't include derogatory references to this specific set of 200,000 of them. Don't delete references to any of the other 300,000."

    European names are a bit easier; I intentionally maximized the problem for clarity.

    285:

    What's the difference between "don't include the website at this specific address" and "don't include the website at this specific address for this specific search term"? The ruling covers removal of specific information, not general Bad Stuff (probably - the breadth of the "right to be forgotten" stuff is untested as yet); demanding the complainer send on a list of URLs would be entirely in keeping.
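
    (To make that distinction concrete, a sketch with made-up data: a DMCA-style takedown is one global set of URLs, while per-person delisting is a lookup keyed by the (search term, URL) pair, so other queries still find the page.)

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Global takedown: suppress this URL for every query. */
    static const char *global_blocklist[] = { "http://example.com/infringing" };

    /* Per-query delisting: suppress this URL only for this search term. */
    struct delisting { const char *query; const char *url; };
    static const struct delisting delistings[] = {
        { "John Smith", "http://gazette.example/fire-1998" },
    };

    static bool suppressed(const char *query, const char *url)
    {
        for (size_t i = 0; i < sizeof global_blocklist / sizeof *global_blocklist; i++)
            if (strcmp(url, global_blocklist[i]) == 0)
                return true;
        for (size_t i = 0; i < sizeof delistings / sizeof *delistings; i++)
            if (strcmp(query, delistings[i].query) == 0 &&
                strcmp(url, delistings[i].url) == 0)
                return true;
        return false;
    }

    int main(void)
    {
        /* Suppressed for the name search... */
        printf("%d\n", suppressed("John Smith", "http://gazette.example/fire-1998"));
        /* ...but still visible for any other query. */
        printf("%d\n", suppressed("Anytown fire", "http://gazette.example/fire-1998"));
        return 0;
    }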

    286:
    That is, I hope nobody writes deliberately self-modifying code anywhere. In the times of the 8-bit machines of 35 years ago, yes, it was sometimes necessary, but not today. (Please correct me if I'm wrong and it has some real-world use!)

    I work on something that does this routinely to running code in the operating system kernel.

    (Why? Tracing. If you want to know what something is doing, the only way to do it is to change what it is doing so that what it is doing includes keeping track of what it is doing, and then carefully doing what it would have done anyway. Oh, the modification has to bear in mind that there are multiple CPUs in a system, and that those other CPUs might be executing the code being modified, and that even if you change it there are caching layers and performance optimizations and outright CPU bugs in there that might lead to your changes getting there in a different order than you made them. It is a nest of snakes.)
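
    (A userspace toy version of the idea, assuming x86-64, gcc or clang, and a Linux that will grant write+exec on the same page -- hardened kernels may refuse. Real kernel tracing does this plus all the cross-CPU and cache horrors described above.)

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* noinline, so calls really go through the (patchable) machine code. */
    __attribute__((noinline)) static int traced(void) { return 42; }

    int main(void)
    {
        uint8_t  *code = (uint8_t *)(void *)traced;
        long      pagesz = sysconf(_SC_PAGESIZE);
        uintptr_t page = (uintptr_t)code & ~(uintptr_t)(pagesz - 1);

        /* Make the function's page(s) writable as well as executable. */
        if (mprotect((void *)page, 2 * pagesz, PROT_READ | PROT_WRITE | PROT_EXEC))
            return 1;

        printf("before patch: %d\n", traced());

        /* Overwrite the entry point with: mov eax, 7 ; ret */
        static const uint8_t patch[] = { 0xB8, 0x07, 0x00, 0x00, 0x00, 0xC3 };
        memcpy(code, patch, sizeof patch);

        printf("after patch:  %d\n", traced());   /* now prints 7 */
        return 0;
    }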

    287:
    To those of you who think that C is hideously unsafe: you're doing it wrong. I have one word for you: valgrind. Combine this great tool with a proper test suite and C is as safe as any other language.

    I swear by Valgrind. It is a wonderful wizardly work. But it can't spot everything. Not only can it not spot overruns and underruns occurring on the stack, it can't spot bugs that don't occur.

    The one that bit me last week relates to realloc(). Stashing away a pointer to something in a memory region that will later be realloc()ed is dangerous, but it's only an obvious bug if your realloc() happens to be an expansion, not a contraction, and also decides that it can't just expand the region under consideration but needs to move it somewhere else, and if you then proceed to dereference the region under consideration and use the result in such a way that it modifies control flow (because compilers routinely generate code that e.g. accesses data off the end of allocated regions, then ignores the uninitialized bits of the result, for performance reasons).

    Then, and only then, will valgrind notice that something is wrong. So that bug can persist for a long long time, more than long enough to get into the wild with a potential security hole in it.
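
    (A boiled-down version of that trap in C: Valgrind stays silent about the stashed pointer itself, and can only complain if the block actually moves at runtime and the stale alias is then dereferenced -- which is exactly why this class of bug survives test suites.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *buf = malloc(16);
        if (!buf) return 1;
        strcpy(buf, "hello");

        char *alias = buf;        /* pointer stashed into the region */

        /* If realloc() can grow in place, 'alias' still works and no tool
           objects.  If realloc() moves the block, 'alias' now dangles. */
        char *grown = realloc(buf, 1 << 20);
        if (!grown) { free(buf); return 1; }
        buf = grown;

        /* Only this dereference is an error, and only if the block moved;
           valgrind flags it then, and not before. */
        printf("%c\n", alias[0]);

        free(buf);
        return 0;
    }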

    288:

    DMCA and other copyright takedowns go after the content. The links become redirects because the original's not there any more, and it all works by specific address.

    This doesn't work very well.

    A "right to be forgotten" is either "no search will ever return this information about this person" -- maybe possible with strong AI, not something people could do reliably because "this information" and "this person" both have dire definitional issues -- or "no search will ever turn this specified link" -- which is vaguely doable in the same sense that the copyright takedowns are doable but which doesn't work and which certainly doesn't erase the information from human knowledge. It might make things slightly harder to find, that's what it does for music links, after all.

    (Note that the copyright holders' collective real approach in the music case has been to poison the search and content spaces as vigorously as they possibly can; sow the ground with salt. That's not something we'd really like to see with newspaper archives and public records, is it?)

    Search is a hard problem. It only-sorta works now. The existence of even semi-effective general search is totally incompatible with authoritarian social organizations because it makes it really hard to lie. I can't see any structural distinction between a right to be forgotten applied to search and a right to lie.

    I'd be much more impressed if the right to be forgotten was being applied to content, but it won't be, because that newspaper article is someone's property, and there's enough agreement with basically authoritarian personal-honour social structures that the offensive thing is just anybody can find this, not this thing still exists.

    289:

    I am conflicted about the Google ruling, as both sides have grounds for their position. This arose from a bankruptcy revelation; in the UK this would have been disclosed in a report from a credit reference agency or by consulting the court records, so I feel the basis for the decision would not arise in the UK. However, there is an English law, the Rehabilitation of Offenders Act (which may apply in Scotland), under which certain types of criminal offence become spent and disclosure is no longer required. If Google linked to a spent offence, would they be liable to criminal or civil action? I presume that if Google refuse or fail to delete links on request they will be liable for damages, so it will be in their own interest to attempt to remove the links as a demonstration of good faith, in an attempt to limit any damages award.

    290:

    Para 2 - The period before an offender becomes legally rehabilitated may vary in Scotland for specific offences, but there is a Rehabilitation of Offenders Act yes.

    291:

    This sounds like something out of Delany's Triton:

    Twelve years ago some public channeler had made a great stir because the government had an average ten hours videotaped and otherwise recorded information on every citizen with a set of government credit tokens and/or government identity card. Eleven years ago another public channeler had pointed out that ninety-nine point nine nine and several nines percent more of this information was, a) never reviewed by human eyes (it was taken, developed, and catalogued by machine), b) was of a perfectly innocuous nature, and, c) could quite easily be released to the public without the least threat to government security. Ten years ago a statute was passed that any citizen had the right to demand a review of all government information on him or her. Some other public channeler had made a stir about getting the government simply to stop collecting such information; but such systems, once begun, insinuate themselves into the greater system in overdetermined ways: jobs depended on them, space had been set aside for them, research was going on over how to do them more efficiently—such overdetermined systems, hard enough to revise, are even harder to abolish. Eight years ago, someone whose name never got mentioned came up with the idea of ego-booster booths, to offer minor credit (and, hopefully, slightly more major psychological) support to the Government Information Retention Program: Put a two-franq token into the slot (it used to be half a franq, but the tokens had been devalued again a year back), feed your government identity card into the slip and see, on the thirty-by-forty centimeter screen, three minutes’ videotape of you, accompanied by three minutes of your recorded speech, selected at random from the government’s own information files.
    . . . So, finally (five years ago? No, six), he had entered one, put in his quarter-franq token (yes, it had been a quarter-franq back then) and his card, and watched three minutes of himself standing on a transport platform, occasionally taking a blue program folder from under his arm, obviously debating whether there was time to glance through it before the transport arrived, while his own voice, from what must have been a phone argument over his third credit-slot rerating, went back and forth from sullenness to insistence.
    He had been amused.
    And, oddly, reassured.

    This was written some time back in 1976 -- not bad for prescience, eh? Not to mention by someone who didn't know bupkas about computers ;-)

    292:

    I'd call that a workaround of the law, pretty much like how the DMCA allows us to pretend there's some process for dealing with copyright violations while the net remains full of copyright violations.

    293:

    It looks like the derogatoryinformation.com address is available, if anyone wants to collect the funniest stuff removed from the web by EU court order. It could be the new Lolcats.

    294:

    I think that there's a British library/museum that is collecting data from across the entire Internet/Web as an anthropological exercise. So this legislation also impacts and potentially shuts down academic/historical research?

    A possible work-around is for Google (or whoever) to start automatically serving up a notice every time someone posts to the web: "Are you sure/double-sure you want to post this?" (I think this is the current nod-to-Legal subscriber info double-opt-in that's being used in several jurisdictions.) Everything prior would be combed through and discarded ... eventually. But at least, there will be less stuff to sort going forward. Auto-archiving/trawling could be yet another option/service: charge users to view/see any information more than a certain number of years old.

    295:

    A possible work-around is for Google (or whoever) to start automatically serving up a notice every time someone posts to the web:

    And just how does Google (or whoever) insert themselves into every post on the internet?

    Google indexes what is there. They do NOT act as or interact with gatekeepers except for a subset of the total Internet. Mainly those using Google's Apps.

    Are you saying the two ladies I support in their blog get to start vetting things for their commenters? Or that if they write a blog post that references someone in Europe, they must determine whether that person doesn't want to be "known" any more? This blog is based in the US.

    296:

    This is the project ... UK only, but if the EU complies and the UK doesn't, then we might see a shift to more people using .uk as their long-term 'off-shore digital info storage' destination.

    British Library - Wikipedia ---

    "On Thursday, 5 April 2013, Lucie Burgess, the British Library's head of content strategy, announced that, starting that weekend, the Library will begin saving all sites with the suffix .uk- every British website, e-book, online newsletter, and blog, in a bid to preserve the nation's "digital memory" (which as of then amounted to about 4.8 million sites containing 1 billion web pages). The Library will make all the material publicly available to users by the end of 2013, and will ensure that, through technological advancements, all the material is preserved for future generations, despite the fluidity of the Internet.[22]"

    297:

    That's what I think the legislation is pointing toward ... and I don't see how the EU can make this workable easily or quickly.

    Mind you, we already know that online giants can die pretty quickly (Netscape, AOL), so this could be a serious issue: if some company comes along that has the EU's blessing, then bye-bye Google.

    298:

    More likely Google remains strong in the US market, Baidu continues to get the support of the Chinese market, and some European search engine gets favorable treatment from European authorities.

    P.S. www.oldshames.com seems to be available, and would make a good meme website taking advantage of this, if anyone (other than persons subject to EU jurisdiction) cares to start it.

    299:

    What Google and others can do, eventually, is split the data sets returned by "google.com" and "google.us" from those returned by local Google searches in EU countries, run on local datacenter servers in those countries.

    If some miscreant should happen to use google.com from the UK and search on a US owned and resident server and get results banned in the UK, that's their own fault.

    So far, attempts to hold European branches responsible for data hosted in US branches have more or less failed.

    300:

    To me this seems like old farts (I'm officially one as of two weeks ago) thinking of the web as electronic pages. And more and more it's not. Look at this blog. How do they archive it? Save a snapshot of every post every day? Then try and figure out when the pages go "static"? There are some sites which don't even come close to the staticness of this site. Think of CNN or the BBC.

    Again, I suspect this project, and maybe the EU court, think of the Internet as a newspaper with a new edition every day, when in fact editions "printed" last week or 10 years ago might change totally overnight.

    Storage vendors are likely drooling at the thought of how much they will get to sell to these folks.

    301:

    There seems to be a bit of confusion over the issues surrounding this Google ruling.

    As I understand it the ruling only refers to cases where someone makes a specific request. That request will be along the lines of 'When you search for X in Google the 5th listing is out of date and, while true, paints me in a bad light. I would like it removed'.

    This means that there is no requirement for convoluted algorithms searching through hundreds of names and having to tell a newspaper from a blog etc. However, it does require a human to view the request and the content that is linked to and make a judgement as to whether to remove the link.

    This is what Google doesn't want. They want just to display links to things through automatic indexing and leave the legal side to the original publishers of the info. Real people cost money.

    There are a number of issues with the whole thing. The first is that currently one company has effective control over the visibility of websites: things are seen if Google wants them seen, not because you want them seen. This caused problems a year or two ago when Google got caught artificially elevating their own products in the search listings.

    However, it is unworkable to demand that certain things be removed from the search listings despite not being illegal to publish. This would require a system to prevent it being re-indexed in the future too.

    There are too many variables for this to be handled by a company - even one of Google's size. For example, how do you decide who has a right to remove what? Does it only relate to searches and articles that refer to someone by name? What if two, or more, individuals are involved and one wants it removed and the other doesn't? What checking is going to be done to prevent someone having a political rival or commercial competitor removed from the listings against their will?

    Ultimately we now rely on web search for access to information, but search is fundamentally broken. It has never truly worked, as it is an afterthought add-on to a system that was never designed to handle it.

    So I believe that something needs to be done, but this isn't the solution.

    302:

    There won't be a problem with projects like this, or with other websites that make reference to the hidden content. The content itself isn't illegal. If it was, then the publisher could be forced to remove it and be subject to further legal action; in that event it would cease to be linked to by Google at some point (and the linked-to content wouldn't be visible anyway).

    With links removed from Google to content that isn't illegal, that content is still allowed to exist and be visible; it just requires people to search for it directly.

    Take as an example someone with a spent conviction who doesn't want it to appear in the listings when someone searches for their name: the link is suppressed only for that search. If someone searches for '$name $conviction' it will still appear, as they must already know about it to be searching for it.

    If there is a legal reason why the spent conviction shouldn't be shown, then it is no longer an issue for Google but for the original publisher.

    Again, if someone searches the project that you mentioned they will get the information, but only if they know what they are looking for -- which is quite legal. The project is archiving the sites themselves, not the Google indexes that would let you search them as if you were using Google.

    303:

    Do you have any idea of how many name collisions there are in the US...

    Interesting thing about that, I just discovered this afternoon that my county library system had redone the way its computer system handled book reservations. (Because the old way was easy to use and worked, apparently.) Now patrons are required to have user names, over and above the card numbers that have previously been used and which have been just fine for the twenty years I've been using this library system. For some reason these must also be unique; obviously you won't have two or more guys named Mike Smith or John Anderson in a metropolitan area...

    The smallest population where I personally found a name collision was at a tiny specialist SF convention. I was running registration, and in about fifty people we had two David Andersons.

    304:

    Not total collisions, but in my high school of 950 there were 11 students in my physics class, all in the same grade. 5 of us were Davids.

    305:

    In the building where I work there were at one time 80 people. 10 of them were women. Of the 70 men 10 shared one first name, 5 shared a different first name, another 5 shared a 3rd, and we had surname collisions on all 3 first names as well.

    306:

    Back to the original problem, so what happens when, say, a potential employer sees your name linked to various and nefarious deeds in the paper? Not necessarily you, you understand. Just your name.

    307:

    Well, what I was trying to demonstrate was that there are substantial numbers of false positives generated by most such searches. Accordingly, making a decision based on a websearch alone seems to me to be ill-advised (or is that having sufficient software and statistics background to understand false positives talking?).

    308:

    It seems 'ill-advised', yes. So what actually happens in the real world? Obviously John Doe is still getting hired every day ... Maybe it's only people with names like Engelbert Humperdinck who get falsely fingered, and in that context false positives are perfectly acceptable. I knew there was a reason why my daughter's first name is 'Ellen' ;-)

    309:

    Interestingly, this relates to a story that appeared here this weekend:

    http://www.thestar.com/news/canada/2014/05/17/no_charges_no_trial_but_presumed_guilty.html

    Apparently, simply having contact with police in Canada is enough to get you a record, and many of the people who ask for a police record check assume that any record means you must have been in trouble.

    At 46, after decades of getting by on contracts in the animation industry and then working long hours as a chef, he decided to pursue a career that matched his abilities to his passion. He enrolled in George Brown College to become a nurse.

    “I was excited,” says Sinclair, now 50. “I wanted to go to Africa and work with Doctors Without Borders. Those plans have all been ruined.”

    In 2011, with thousands of dollars spent on tuition and two semesters on the Dean’s Honour List, Sinclair was forced out of the program when charges from 20 years before showed up on a mandatory police check.

    The charges — he had been rounded up in a raid on the comic book store where he worked — were never proven and were dismissed by a judge. By any measure, Gordon Sinclair was and is innocent.

    Hundreds of thousands of Canadians — perhaps millions — are vulnerable to seeing their ambitions crushed, reputations ruined and livelihoods shattered because of a lack of legislation across Canada to dictate what information police can or cannot release, a Star investigation has found.

    The situation has become critical, as more and more employers, volunteer groups, licensing bodies, governments and universities are requiring police checks that frequently disclose so-called non-conviction records — everything from simple contacts with police and 911 mental health calls to charges that were dropped, withdrawn or led to not-guilty verdicts due to lack of evidence.

    Detailed interviews with nearly a dozen Canadians with such records include an Ottawa man who lost a career with Air Canada — even though he was never charged or convicted of any crime — because police years earlier took note of him with a suspected drug dealer in the low income neighbourhood where he grew up.

    I find the last particularly ironic given the Toronto mayor's famous dealings with drug dealers.

    In August, 1991, when Sinclair was working as a part-time clerk in the now-defunct Dragon Lady Comics store, then on Queen Street in Toronto, police arrived one August afternoon and gathered comics they deemed to be obscene.

    They seized dozens of copies of Melody, a comic book about an exotic dancer.

    Produced in Quebec with financial support from the provincial government, the comic, tame by contemporary standards, raised eyebrows at the time and triggered police action. Sinclair, then 27, was charged with 33 counts of possession of obscene material for the purpose of selling. The charges — one for each copy of the comic seized — were laid against every staff member at the store, he says.

    “We didn’t think much of it because we knew there was nothing to the allegations,” says Sinclair. “It was nothing nearly as bad as Hustler (magazine).”

    The charges were soon withdrawn and Sinclair thought no more of it — until he applied for a police background check that would allow him to take training placements at a nursing home or hospital, a requirement of his George Brown course.

    The check came back with the long-since-tossed charges listed.

    “I was a bit shocked,” he says. “I went to the dean of nursing and explained the situation. She said I couldn’t do any clinical work and said, ‘We have to remain neutral.’ ”

    What I find interesting is that the organizations getting the data are essentially assuming that any mention means they should avoid you, just to be safe. Which they can get away with as long as there are plenty of other candidates for the position. The simple solution would be for a police records check to include only convictions and any charges still before the courts, but that would be undermined by newspaper reports, which frequently list charges (and rarely follow up to report the acquittal).

    310:

    This is where one hires a lawyer and sues for discriminatory hiring/firing practices (lack of due cause), emotional and financial/professional injury, etc., because due diligence in the hiring/firing process means that the employer actually checked and verified the data, and applied some common sense in interpreting it.

    Surprised this isn't already case law. Considering that the activity (online searches) is both widespread and has already been shown to produce serious, measurable consequences, this has 'Supreme Court of Canada' written all over it.

    Any Canadian lawyers reading here?

    311:

    Decided to check whether SCOC had ruled on any online issues other than copyright: it has, and there's some interesting discussion on some of the side issues raised on the present topic.

    http://www.scc-csc.gc.ca/factums-memoires/33412/FM020_Respondent_Jon-Newton.pdf

    The SCOC site has all sorts of up-front info on 'visitor privacy', how/when visitor info gets turfed, etc. A really interesting read.

    312:

    Search is fundamentally broken.

    Search worked better 15 years ago, as I recall it.

    Ad-funded search depends on a balance between 1) showing ads, which provide revenue, 2) returning useful information, which supplies viewers for all those ads, and 3) search engine optimization, which pushes commercial links to the top of the results without funding the search engine. We'd like to think #2 would win out, but Google makes more money if #2 is just good enough to avoid mass defections (to Bing, Yahoo, Ask.com, etc.).

    313:
    Search worked better 15 years ago, as I recall it

    Number of folk on the net in 1999: 248 million

    Number of folk on the net March, 2014 (Estimate): 2,937 million

    (source http://www.internetworldstats.com/emarketing.htm)

    Number of pages first indexed by Google in 1998: 26 million.

    (source http://googleblog.blogspot.co.uk/2008/07/we-knew-web-was-big.html)

    Number of pages indexed last year: over 30 trillion!

    (source http://www.google.com/insidesearch/howsearchworks/thestory/)

    TL;DR: Search was a lot, lot easier 15 years ago ;-)
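
    For scale, here's that arithmetic as a quick Python sketch; it just combines the figures cited above, taking "last year" to be the 2013 index size:

        # Growth ratios from the numbers quoted above.
        users_1999 = 248e6     # people online in 1999
        users_2014 = 2_937e6   # people online in March 2014 (estimate)
        pages_1998 = 26e6      # pages in Google's first index, 1998
        pages_2013 = 30e12     # pages indexed "last year" (over 30 trillion)

        print(f"Users grew ~{users_2014 / users_1999:.0f}x")    # ~12x
        print(f"Index grew ~{pages_2013 / pages_1998:,.0f}x")   # ~1,153,846x

        # Pages per user: roughly 0.1 in 1998-99 versus ~10,000 now.
        ratio = (pages_2013 / users_2014) / (pages_1998 / users_1999)
        print(f"Pages per user grew ~{ratio:,.0f}x")            # ~97,000x

    So the haystack grew roughly a hundred thousand times faster than the pool of people searching it.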

    314:

    Saw this, and thought the readers here might want to participate as voters and/or inventors:

    "UK prize lets public decide on world's biggest science problem. Winning challenge will be focus of £10-million Longitude Prize fund... more than 100 leading scientists have identified six major scientific problems, and the public are being invited to vote on which one should be made the focus of the challenge. The six problems include food, water scarcity, climate change, antimicrobial resistance, paralysis and dementia. The public voting will start on 22 May and will go on until 25 June. Inventors around the world will then have five years to work on the problem. The team that comes up with the best solution will receive the £10-million prize. Votes will be collected on the webpage of the BBC2 television show Horizon."

    Personally, I'd vote for whichever isn't getting adequate funding at present because all of these topics are important, and if all bases aren't adequately covered, we've a problem. (Dislike running science funding as a popularity contest. Yeah ... like the best science minds just happen to be in this year's 'cool field/group'.)

    315:

    I can't say I know what good £10 million's worth of solution to climate change would be. ITER, which is trying to solve the same problem, had an initial budget of 10 billion euros, is way over budget, and may not work anyway.

    316:

    There isn't a "climate change" challenge. There is a "build an aeroplane that flies London-Edinburgh at comparable speed to today's aircraft emitting as close to zero carbon as possible" one. Whoever designed this understood the need for achievable goals, it seems.
