
Where we went wrong

According to one estimate pushed by the FBI in 2006, computer crime costs US businesses $67 billion a year. And identity fraud in the US allegedly hit $52.6Bn in 2004.

Even allowing for self-serving reporting (the FBI would obviously find it useful to inflate the threat of crime, if only to justify their budget requests), that's a lot of money being pumped down a rat-hole. Extrapolate it worldwide and the figures are horrendous — probably nearer to $300Bn a year. To put it in perspective, it's like the combined revenue (not profits; gross turnover) of Intel, Microsoft, Apple, and IBM — and probably a few left-overs like HP and Dell — being lost due to deliberate criminal activity.

Where does this parasitic drag come from? Where did we go wrong?

I'm compiling a little list of architectural sins of the founders (between 1945 and 1990, more or less) that have bequeathed us the current mess. They're fundamental design errors in our computing architectures; their emergent side-effects have permitted the current wave of computer crime to happen ...

1) The Von Neumann architecture triumphed over Harvard Architecture in the design of computer systems in the late 1940s/early 1950s.

In the Von Neumann architecture, data and executable code are stored in the same contiguous memory in a computer; in Harvard Architecture machines, data and code have separate, disjoint areas of memory and never the twain shall meet. Von Neumann architectures are simpler and cheaper, hence were more popular for about the first forty or fifty years of the computing revolution. They're also more flexible. Allowing data and executable code to share the same address space allows for self-modifying code and for execution of data as code — sometimes these are useful, but they're horrible security holes insofar as they permit code injection attacks to happen. There have been some moves by the likes of Intel in more recent architecture iterations to permit chunks of memory to be locked to one function or the other, thus reducing the risk of code injection attacks — but it's too little, and much too late.
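
To make the idea concrete, here's a minimal sketch (assuming a Unix-like system; mmap and mprotect here stand in for the hardware-level no-execute support, and none of this is vendor-specific) of memory that is either writable or executable, but never both at once:

    /* W^X in miniature: a page is data or code, never both. cc -o wx wx.c */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);

        /* Stage 1: a plain data page -- readable and writable, but NOT executable. */
        unsigned char *page = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }

        /* An attacker who injects bytes here can't run them: the CPU faults on
           any attempt to execute from a page that lacks PROT_EXEC. */
        memset(page, 0xC3, pagesz);   /* 0xC3 is x86 "ret", but here it's just data */

        /* Stage 2: to ever execute it, the program must explicitly flip the page
           to read+execute and give up write access -- that's the whole discipline. */
        if (mprotect(page, pagesz, PROT_READ | PROT_EXEC) != 0) {
            perror("mprotect");       /* some hardened kernels refuse even this */
            return 1;
        }

        printf("page is now read+execute only; writing to it would fault\n");
        munmap(page, pagesz);
        return 0;
    }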

2) String handling in C uses null-terminated strings rather than pointer-delimited strings. A null character (ASCII 0) denotes the end of a string (a block of adjacent memory cells containing one character of data each) in the C programming language's memory management (cough, choke) system. What if you want to write a string containing ASCII 0, or read or write beyond a null? C will let you. (C will not only let you shoot yourself in the foot, it will hand you a new magazine when you run out of bullets.) Overwriting the end of a string or array with some code and then tricking an application into moving its execution pointer to that code is one of the classic ways of tricking a Von Neumann architecture into doing something naughty.
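
Here's the failure mode in miniature; this is a hypothetical sketch rather than code from any real application, with one routine copying blindly and the other telling the library how big the buffer actually is:

    #include <stdio.h>
    #include <string.h>

    /* The kind of routine that litters old C code. */
    void greet_unsafe(const char *name) {
        char buf[16];
        strcpy(buf, name);   /* no length check: a name longer than 15 characters
                                scribbles past the end of buf, over whatever the
                                compiler put next to it (saved registers, the
                                return address...) */
        printf("hello, %s\n", buf);
    }

    void greet_bounded(const char *name) {
        char buf[16];
        /* snprintf always NUL-terminates and never writes more than sizeof buf. */
        snprintf(buf, sizeof buf, "%s", name);
        printf("hello, %s\n", buf);
    }

    int main(void) {
        const char *attacker = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";  /* 32 bytes */
        /* greet_unsafe(attacker);    <- undefined behaviour: a classic stack smash */
        greet_bounded(attacker);      /* truncates safely to 15 chars plus the NUL */
        return 0;
    }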

In contrast, we've known for many decades that if you want safe string handling, you use an array — and stick a pointer to the end of the array in the first word or so of the array. By enforcing bounds checking, we can make it much harder to scribble over restricted chunks of memory.

Why does C use null-terminated strings? Because ASCII NUL is a single byte, and a pointer needs to be at least two bytes (16 bits) to be any use. (Unless you want short strings, limited to 255 bytes.) Each string in C was thus a byte shorter than a pointer-delimited string, saving, ooh, hundreds or thousands of bytes of memory on those early 1970s UNIX machines.
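
For comparison, a counted-string type is only a few lines of C. The sketch below is hypothetical (the struct and function names are invented for illustration), but it shows where the extra word of memory goes and what it buys you:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* A counted string: the length travels with the bytes, so every operation
       can be bounds-checked. Costs a word or two of overhead per string --
       exactly the economy the early-70s design was avoiding. */
    struct cstr {
        size_t len;            /* bytes of data[] actually in use */
        size_t cap;            /* bytes allocated */
        unsigned char *data;
    };

    static struct cstr cstr_new(size_t cap) {
        struct cstr s = { 0, cap, malloc(cap) };
        return s;
    }

    /* Append refuses to run off the end instead of trusting a terminator. */
    static int cstr_append(struct cstr *s, const void *src, size_t n) {
        if (s->data == NULL || n > s->cap - s->len)
            return -1;         /* would overflow: reject rather than scribble */
        memcpy(s->data + s->len, src, n);
        s->len += n;
        return 0;
    }

    int main(void) {
        struct cstr s = cstr_new(8);
        printf("%d\n", cstr_append(&s, "hello", 5));    /*  0: fits */
        printf("%d\n", cstr_append(&s, "world!", 6));   /* -1: rejected, no overflow */
        printf("%d\n", cstr_append(&s, "\0ok", 3));     /*  0: embedded NUL is fine too */
        free(s.data);
        return 0;
    }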

(To those who might carp that C isn't really used much any more, I should reply that (a) yes it is, and (b) what do you think C++ is compiled to, before it's fed back to a compiler to produce object code?)

3) TCP/IP lacks encryption at the IP packet level. Thank the NSA in the early 1980s for this particular clanger: our networking is fundamentally insecure, and slapping encryption on high-level protocols (e.g. SSL) doesn't address the underlying problem: if you are serious about comsec, you do not allow listeners to promiscuously log all your traffic and work at cracking it at their leisure. On the other hand, if you're the NSA, you don't want the handful of scientists and engineers using the NSF's backbone to hide things from you. And that's all TCP/IP was seen as, back in the early 80s.

If we had proper authentication and/or encryption of packets, distributed denial-of-service attacks would be a lot harder, if not impossible.
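
As a rough sketch of what per-packet authentication amounts to, here's a shared-key message authentication code over a payload, using OpenSSL's HMAC as a stand-in. Real IPsec does a great deal more (key exchange, sequence numbers, replay windows), so treat this as the bare idea only:

    /* Sketch: a receiver who shares the key can reject forged or tampered
       packets. Build with: cc pkt.c -lcrypto */
    #include <stdio.h>
    #include <openssl/crypto.h>
    #include <openssl/hmac.h>

    int main(void) {
        const unsigned char key[] = "shared-secret-key";   /* assumed pre-shared */
        const unsigned char payload[] = "SYN seq=12345 src=10.0.0.1 dst=10.0.0.2";

        /* Sender: compute the tag and transmit it alongside the payload. */
        unsigned char tag[EVP_MAX_MD_SIZE];
        unsigned int taglen = 0;
        HMAC(EVP_sha256(), key, (int)(sizeof key - 1),
             payload, sizeof payload - 1, tag, &taglen);

        /* Receiver: recompute over what arrived and compare the tags. */
        unsigned char check[EVP_MAX_MD_SIZE];
        unsigned int checklen = 0;
        HMAC(EVP_sha256(), key, (int)(sizeof key - 1),
             payload, sizeof payload - 1, check, &checklen);

        if (taglen == checklen && CRYPTO_memcmp(tag, check, taglen) == 0)
            printf("packet accepted (%u-byte tag)\n", taglen);
        else
            printf("packet rejected: tag mismatch\n");
        return 0;
    }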

DNS lacked authentication until stunningly recently. (This is a sub-category of (3) above, but shouldn't be underestimated.)

4) The World Wide Web. Which was designed by and for academics working in a research environment who needed to share data, not by and for banks who wanted to enable their customers to pay off their credit card bills at 3 in the morning from an airport departure lounge. (This is a whole 'nother rant, but let's just say that embedding JavaScript within HTML is another instance of the same code/data exploit-inviting security failure as the Von Neumann/Harvard Architecture model. And if you don't use a web browser with scripting disabled for all untrusted sites, you are some random black hat hacker's bitch.)

5) User education, or the lack of it. (Clutches head.) I have seen a computer that is probably safe for most users; it's called an iPad, and it's the digital equivalent of a fascist police state: if you try to do anything dodgy, you'll find that it's either impossible or very difficult. On the other hand? It's rather difficult to do anything dodgy. There aren't, as yet, any viable malware species in the wild that target the curated one-app-store-to-rule-them-all world of Apple. (Jailbroken iOS devices are vulnerable, but that's the jailbreaker's responsibility. Do not point gun at foot unless you have personally ensured that it isn't loaded and you're wearing a bulletproof boot.)

In the meantime, the state of user interfaces is such that even folks with degrees in computer science often find them perplexing, infuriating, and misleading. It's hardly surprising that the digital illiterati have problems — but a few years of reading RISKS Digest should drive even the most Panglossian optimist into a bleak cynicism about the ability of human beings to chew gum and operate Turing Machines at the same time.

6) Microsoft.

Sorry, let me rephrase that: Bloody Microsoft.

Specifically, Microsoft started out on stand-alone microcomputers with a single user. They took a very long time to grasp multitasking, and much longer to grasp internetworking, and even longer to get serious about security. In fact, they got serious about memory protection criminally late — in the early to mid 2000s, a decade after the cat was out of the bag. Meanwhile, in their eagerness to embrace and extend existing protocols for networking, they weren't paying attention to the security implications (because security wasn't an obvious focus for their commercial activities until relatively recently).

We have a multiculture of software — even Microsoft's OSs aren't a monoculture any more — but there are many tens or hundreds of millions of machines running pre-Vista releases of Windows. Despite Vista being a performance dog, it was at least their first release to take security seriously. But the old, bad, pre-security Microsoft OSs are still out there, and still prone to catching any passing worm or virus or spyware. And Microsoft, by dropping security support for older OSs, aren't helping the problem.

Anyway, I'm now open for suggestions as to other structural problems that have brought us to the current sorry state of networking security. Not specific problems — I don't want to hear about individual viruses or worms or companies — but architectural flaws that have contributed to the current mess.

Where did we go wrong?

212 Comments

1:

Nitpick: while the first few C++ compilers may have used C as an intermediate language, I'm pretty sure today's compilers go straight from C++ to their own internal representation, and from there to assembly/object code. No C involved.

Other than that, yeah, you nailed some of the highlights.

2:

Just replying to your first point: I don't think the Harvard architecture is really that much of a win in security (although it is a win). The return-into-libc game has evolved enough that it's apparently possible to execute arbitrary computations just by providing the right return addresses in an exploit to a sufficiently complex program. See https://docs.google.com/viewer?url=http://cseweb.ucsd.edu/~hovav/dist/geometry.pdf

Sound scary and awesome? It does to me. (Also, that paper's title is pretty goff. Beware.)

3:

"what do you think C++ is compiled to, before it's fed back to a compiler to produce object code?"

That's not a very accurate description of the way C++ is compiled, and hasn't been since the early '90s. OTOH, this doesn't mean that C++ is any good. C++ gives you the same suicide gun that C does, and then throws in a chainsaw, a poison pill, and a grenade to boot.

You need to add: e-mail. Another variation on your (3), essentially, as the email protocol has no authentication of the sender, making it a playground for spammers and abusers. Workarounds have been added, but really we wouldn't even need Bayesian spam filtering if the email architecture weren't so busted to begin with.

I also notice that your list of problems breaks down into two meta-problems:

1) Baked-in assumptions from the days when all computer users were experts and could basically be trusted. (Your #s 1-3)

2) Baked-in assumptions from the days when all computers were standalone machines, or only networked with a small number of other trusted machines. (Your #s 4-6).

4:

Note that the Von Neumann architecture also allows you to load programs -- at some point, memory has to be written as data before it can be executed.

The problem was that most people didn't make it exclusive.

Some PDP-11 models had a split instruction and data space; they were still Von Neumann architectures.

I disagree with (2) as well, but it's more about your take on it; I've used systems that had bounds-checking in hardware, and also had C compilers on them. And even used C compilers that turned all pointers into <base, length> objects. When you get down to it, I think the real problem related to (2) is that C is a fine system implementation language, but not a good application implementation language.
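
A hand-rolled version of that <base, length> idea looks something like the sketch below (the names are invented, and this isn't any particular compiler's scheme): every access goes through a check that knows how big the object really is:

    #include <stdio.h>

    /* A "fat pointer": base and length travel together. */
    struct fatptr {
        int    *base;
        size_t  len;      /* number of elements, not bytes */
    };

    static int fat_get(struct fatptr p, size_t i, int *out) {
        if (i >= p.len) return -1;      /* out of bounds: refuse */
        *out = p.base[i];
        return 0;
    }

    static int fat_set(struct fatptr p, size_t i, int v) {
        if (i >= p.len) return -1;
        p.base[i] = v;
        return 0;
    }

    int main(void) {
        int storage[100] = {0};
        struct fatptr a = { storage, 100 };

        printf("%d\n", fat_set(a, 50, 7));     /*  0: in bounds */
        printf("%d\n", fat_set(a, 200, 7));    /* -1: the out-of-range write is refused */

        int v;
        if (fat_get(a, 50, &v) == 0)
            printf("a[50] = %d\n", v);
        return 0;
    }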

5:

Even with the most secure architecture possible, the social engineering schemes to get access to personal data will still work. And that is before criminals pay for personal data via bribery or blackmail.

I think this post would be more interesting if you had any breakdown between losses due to architecture versus those due to other means. For example, when a laptop with the personal data of millions of people is stolen, that doesn't really count as an architecture problem, but this sort of security breach might be larger in impact than packet sniffing.

6:

I actually think that our approach to security is all wrong. I wrote a blog post about this a couple of years ago: http://piaw.blogspot.com/2008/08/security-boondoggle.html, and pointed to Eric Rescorla's excellent talk: "The Internet Is Already Too Secure" http://www.rtfm.com/TooSecure-usenix.pdf

I think his talk sums up the entire situation we're in.

7:

How about the fundamental lack of interest in authentication at the user-to-user communication level? Phishing scams work, for the most part, on the idea that the receiver trusts the sender enough to do, well, something. It's not just that users are remarkably susceptible to requests from anybody that would make a request (a separate problem); they don't even have the tools available to know who made that request in the first place. If a local user is not capable of differentiating requests from his or her sysadmin and your friendly neighborhood scammer, then something is wrong. And we don't even seem to be entering the tool-creation stage of solving this problem.

For instance, PGP over email has been a horrible failure, even if some of us practice it religiously. But nothing else even comes close.

8:

Another architectural problem is using fixed secrets for authentication when cryptographic protocols are available.

For example, people still use passwords to log into computer systems. Schemes like S/Key should be used.
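
For the curious, the hash-chain trick behind S/Key-style one-time passwords fits in a few lines. The sketch below uses SHA-256 as a stand-in for the original algorithm and skips all the real protocol details (seeds, counters, word encoding), so it's the bare idea only:

    /* The server stores only the *next* hash in the chain, so a password
       captured off the wire is useless for future logins.
       Build with: cc otp.c -lcrypto */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/crypto.h>
    #include <openssl/sha.h>

    #define CHAIN_LEN 1000

    static void hash_once(const unsigned char *in, size_t n,
                          unsigned char out[SHA256_DIGEST_LENGTH]) {
        unsigned char tmp[SHA256_DIGEST_LENGTH];
        SHA256(in, n, tmp);               /* copy via tmp so in == out is safe */
        memcpy(out, tmp, sizeof tmp);
    }

    int main(void) {
        const char *seed = "correct horse battery staple";   /* hypothetical secret */

        /* Client: hash the seed CHAIN_LEN times; the server stores the end link. */
        unsigned char server_state[SHA256_DIGEST_LENGTH];
        hash_once((const unsigned char *)seed, strlen(seed), server_state);
        for (int i = 1; i < CHAIN_LEN; i++)
            hash_once(server_state, sizeof server_state, server_state);

        /* Login: the client reveals the previous link, h^(CHAIN_LEN-1)(seed). */
        unsigned char otp[SHA256_DIGEST_LENGTH];
        hash_once((const unsigned char *)seed, strlen(seed), otp);
        for (int i = 1; i < CHAIN_LEN - 1; i++)
            hash_once(otp, sizeof otp, otp);

        /* Server: hashing the offered value once must reproduce its stored state. */
        unsigned char check[SHA256_DIGEST_LENGTH];
        hash_once(otp, sizeof otp, check);
        if (CRYPTO_memcmp(check, server_state, sizeof check) == 0) {
            printf("login ok; server now stores the value just used\n");
            memcpy(server_state, otp, sizeof otp);   /* an eavesdropper can't replay it */
        } else {
            printf("login rejected\n");
        }
        return 0;
    }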

The same problem is also present in multiple places outside of computers:

Mechanical keys for locks.

Written signatures on checks. Anybody I write a check to has enough information to forge a check from me.

Credit cards have a few fixed pieces of information on them: a 16 digit number on one side, a month, a name, and a 3 or so digit number on the back. The signature on the back is not checked in practice. If someone has knowledge of these few fixed pieces of information on my credit card, that is interpreted as evidence that I have agreed to transfer to them whatever quantity they claim I have agreed to transfer to them. Instead, the card should have a computer inside and a private key, it should keep the private key private, and it should only emit signed statements about who I have agreed to pay and how much.

Voting. There are good cryptographic voting schemes, but instead we have voting machines that are either paper based or electronic, but either way they cannot be audited by the voter.

9:

I think an issue with C that caused as many or even more problems than just null-terminated strings is all the APIs they provided to read data from a stream with no means of specifying a maximum length of data to read. I think this was an oversight that has cost us a great deal.
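
The canonical example is gets() versus fgets(): the former has no way to be told how big the buffer is, and was eventually removed from the C standard altogether, while the latter takes the size as a parameter:

    #include <stdio.h>

    int main(void) {
        char buf[64];

        /* The old way: gets() reads until newline with no idea how big buf is.
           It was so unfixable it was removed from the language in C11.

               gets(buf);            // unbounded write into a 64-byte buffer
        */

        /* The bounded way: the API is told the buffer size and stops there. */
        if (fgets(buf, sizeof buf, stdin) != NULL)
            printf("read: %s", buf);

        return 0;
    }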

10: 3 is a bust, sorry. We have IP encrypted at the packet level in the form of IPsec - and it's so very not worth it. Encryption at that level is hard to manage, wasteful of CPU time and user attention, and doesn't really gain you anything over encryption at the application level. The IP layer just doesn't correspond very well to the authorisation domains you want to use encryption across.

A similar technology that you hint at: routing authentication at the packet level would be an interesting security technology - ensuring that all packets were sent to hosts that expected to receive them. It is also many years beyond our current technology level. We don't yet have that kind of computing power. We may never have it: user demand keeps expanding, often faster than technology can keep up, cryptographic authentication at the packet level is expensive, and we don't have a spare factor of 2 to 5 or so in the CPU power of network equipment to make it happen.

In economic terms: as long as bandwidth is a scarce resource, users will not in general accept a slower service when a faster, less secure one is available. It's hard enough to get them to accept slower secure services for things which they really want to be secure.

11:

I'm not an IT professional. Actually I'm a truck driver. However, it seems to me that the old Amiga had something going for it that we lack nowadays. It had its OS in firmware. Fairly unhackable if I recall.

12:

I'll add to your list:

The Internet's End-to-End principle. It's unique in that it pushes almost all network intelligence out to the nodes on the edge. Compare it to a) the telephone/GSM network or b) an IBM SNA mainframe network. The ability of a rogue/hostile node to attack others is fairly limited, unlike TCP/IP where it's open slather.

Of course, this gets back to your #5, both are the digital equivalent of a fascist state. Users and developers have almost no freedom other than what the network operator allows them.

Like all of your points, we've chosen utility/convenience over security/correctness. There's a deeper lesson there for the security profession, but I'm not sure what it is.

13:

Sorry, no, that doesn't buy you much. For one thing, either there's a way to write to it, or you don't get OS upgrades. For another, problems happen with applications, and with (as Charlie mentioned) writable memory that can be executed.

Consider the iPhone, which also has a read-only OS. That hasn't stopped people from finding bugs to exploit (all jail-breaking is done via security bugs, remember).

14:

ISTM that most of the architectural problems result from a refusal of the implementors at a given architectural level to deal directly with security problems at their level, but instead to insist that programmers at higher levels deal with them, usually in the name of performance. For example, the Linux kernel implementors refused to consider using a microkernel architecture because of the performance impact, pushing the implementation of OS access authentication up to the user level, and resulting in lots of places where it's easy for a simple bug to cause a security hole.

And don't get me started on Intel's abandonment of multi-ring (n > 2) hardware security architectures (I know the names of the people responsible, and I even know where some of them live). The result was to leave non-kernel OS functionality exposed to user-level code. Multi-ring or capability architectures allow much finer authorization models, even allowing some protection against malicious access by administrators.

The World Wide Web. Which was designed by and for academics working in a research environment who needed to share data
And many of whom believed strongly that the availability of information would foster an atmosphere of collegiality amongst all users. Almost 20 years ago, I attended a talk by Andy van Dam, one of the early proponents of hypertext. At one point I asked him what sorts of protection would be available to prevent vandals from defacing or subverting text and links on the internet (this was just as the WWW was starting to catch on outside of CERN). He was indignant at the idea that anything would be allowed to inhibit any user's ability to edit or overlay the content of any other user; this, he thought, was an unacceptable infringement on the freedom of information. I still find this naive attitude among Computer Science academics (outside the security field).

15:

A lot comes down to incentive. The demand is for features, not security, until security becomes a problem. If the vendor of an e-mail program were liable for the damage from viruses launched by people clicking on attachments in that presentation UI, the incentives would be considerably different. (And also chilling to a lot of innovation-- having to pay for software flaw insurance would be a general disincentive. Open source would look considerably different under that architecture.)

16:

I have a hard time imagining a world where computers were based on the Harvard Architecture. Much of what modern operating systems do now would be utterly impossible. Worse, all application development would be utterly gated by the computer manufacturer. No more garage startups when you have to buy special software to write a program.

17:

Authentication, authentication, authentication - and unfortunately the efforts of a minority of digital libertarians to stop any useful progress ('cos the spooks / RIAA / etc are gonna get us).

4 - good point about C being a non-optimum language for application programming. That's a general 'what have we done wrong' point, which is that overly powerful abstractions have won out over safer ones. The culture of snobbery towards application development, and towards 'dumbed down' languages like Ada, doesn't help.

(That said, I can understand why there was a push towards assembler and C / direct memory manipulation languages in the 80s - desktop machines were 'slow', and users expected real time performance)

18:

Our present encryption systems assume that P!=NP, although there is no need. It might be expensive to fix this, but the hole is there all the same. Quantum computers might also qualify as an architectural security hole.

19:

I think your core point about Microsoft is sound, but misses a few major fuck ups that were ramifications of their behaviour that bear bringing out.

  • data as program: you mentioned JavaScript in HTML, but Word macro viruses are older than that. Microsoft succeeded in creating the first multi-platform viral substrate. Yay.

  • insecure-by-default email

  • binding the browser into the operating system, and then making silent installation of ActiveX controls the default.

  • I was going to mention their incompetence at making mobile OSs (they keep trying to make a phone into a PC) but that's not really a security issue as such.

    However, another good one is the way that the content industries have their hooks in the way our computers work. This has profound consequences for the life of our cultural artifacts after the last authentication server gets switched off, something alluded to in Glasshouse but I think worth reiterating here as a cultural security risk.

    20:

    A couple of other architectural issues.

    First is related to your (2) issue: when you make a function call, why put the return address into a callee-writable location? (That is, on the stack, or in a writable register.) Sure, for calling more than one function, you need to be able to stash it somewhere, but why not have a block of memory that only the CPU can write to when it's storing a return address? Similarly, parameters to function calls -- why put them in the same block of memory that function-specific data goes in? (Okay, this one does make more sense, but I'm suggesting that maybe the whole concept here was a flaw. And specific to C here, there's no way to know how many parameters there were. Not a problem for functions with a fixed number of arguments, but you can do variadic functions safely -- C and C++ just don't.)
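
    The variadic point is easy to demonstrate with printf(), which has no idea how many arguments were really passed and simply believes the format string. A hypothetical logging routine shows the trap:

        #include <stdio.h>

        /* Nothing in the call tells printf how many arguments the caller really
           pushed; it walks the argument area for as many as the string mentions. */
        void log_msg(const char *user_supplied) {
            /* WRONG: if user_supplied is "%x %x %x %s", printf will happily read
               four arguments that were never passed, leaking memory contents or
               crashing -- the classic format-string vulnerability.

                   printf(user_supplied);
            */

            /* Right: attacker-controlled bytes are only ever treated as data. */
            printf("%s\n", user_supplied);
        }

        int main(void) {
            log_msg("%x %x %x %s");    /* printed literally, not interpreted */
            return 0;
        }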

    Second, for IP, the concept of how you connect to another host is seriously flawed. Traditionally, when you wanted to connect to, say, SMTP on another machine, you looked up what the port was on your system (e.g., using /etc/services).

    21:

    @14:

    ISTM that most of the architectural problems result from a refusal of the implementors at a given architectural level to deal directly with security problems at their level, but instead to insist that programmers at higher levels deal with them, usually in the name of performance.

    I think you've nailed it right there: security, real security, is always Somebody Else's Problem. So despite everyone agreeing that it is, indeed, a big issue, what happens in practice[1] is that it gets implemented as an afterthought after severe constraints have been baked into Best Practices. Kind of like bubbles, in a way :-)

    [1]I'm thinking about Vinge's Rainbows End where Homeland Security requirements have bloated each individual flip-flop into a construct of several thousand transistors.

    22:

    Here's another good one: lack of billing. Perhaps part of #4, but if there had been a way to charge for sending more than say 1000 emails in a day, then spam would never have got to where it is. To be able to charge for things, we would have needed not just better encryption but also the authentication infrastructure on top of it.

    I really dislike the thoughts that I assume must run through spammers' and other con artists' brains. But they're not that different from those that seem to come out by accident from CEOs' mouths either. Maybe the real lesson there is that of needing better education in what each society values.

    ~Matt

    23:

    I think it might be because every step of the mighty infosphere's evolution has more or less been developed by some random geek in a garden shed/bedroom/university/etc., and frankly most of these things started off as pet research projects or "wouldn't it be cool if I cobbled this bit of code onto here and see what happens?"

    You can have any two of cheap, easy to use, and secure, but not all three. And the obvious one which has been chosen since time immemorial is cheap and easy to use; that might as well be a mantra for everything IT-related, because it's certainly what works.

    24:

    Matt @ 22 - maybe it's because I live in a big city, but I receive about two pizza flyers, some sort of building services advert, and some sort of cleaning services advert every two days through my door. All must cost the sender money.

    By contrast my spam filter catches all but about two emails a month.

    Give me email spam any day. Real spam is harder to filter. Mind you I don't often get viruses from physical spam...

    25:

    In regards to the latter part: Tim Berners-Lee regrets the URI syntax. Related is speculation as to why DNS isn't in most-significant-order (i.e. 'org.wikipedia.www', 'com.yourbank.mail'), although the benefit is probably more variable in the face of smart address bars.

    I've also seen a quote somewhere on reddit, purportedly from an old hacker, that 'C is a decent language, but the C standard library is precisely how not to use it'.

    26:

    I think that there is a deeper element to it all - human psychology: trust as a social foundation, and our unwillingness to live our lives in virtual bunkers.

    No matter how heavy your authentication, your selective gateways for restricted escalation of limited rights, or whatever, ultimately we want computers to do what we tell them to do. And we trust bad people too easily, and we accept that we do that sometimes.

    I like geeking about security flaws as much as the next cypherpunk, but really as long as we have people who are lured by the possibility of the fortune that some dead zillionaire left for them in nigeriaville, or in a chance to see topless photos of the starlet du jour, we will have these problems. Our psychology is not suited to the power that our computers give us.

    I think that there could be some hope in collaboratively filtered reputation networks ("8 of your smartest friends liked this widget, none marked it suspect"), but I don't see that technology maturing, much less becoming ubiquitous, without an economic driver far and above current fraud rates.

    Hell, when Microsoft can actually charge more money for an -additional product- that brings some marginal security to their existing near-monopoly OS, I think it is clear that the game is rigged - security is seen as a premium, not as an essential.

    27:

    If you want to look at the ID theft/fraud by impersonation/reputation damage, I think the key confusion is treating the SSN as both an identifier and an authenticator, and rolling together all records which use the same SSN.

    There's a secondary problem, which is that incorrect database records are treated under US law as a problem for the subject of the record, rather than the compiler of the records. This means that inaccurate records accumulate, and the victim, without much recourse, has to get things fixed.

    28:

    Charlie, I recall you were writing bank payment software in a past life. What language were you using and how did you ensure security for your company's clients?

    29:

    I think it's far more likely that the lack of encryption at the network level derived from the oh-so-slow machines that were doing network stuff at the time. Bounteous CPU was a decade later.

    B>

    30:

    @25:

    I've also seen a quote somewhere on reddit, purportedly from an old hacker, that 'C is a decent language, but the C standard library is precisely how not to use it'.

    Whatever happened to "At worst, C is the second-best language to program in for any given environment"?

    31:

    Ref #2. The problem isn't null terminated strings. It's a lack of memory bounds checking. It's memory management. C arrays have the same problem; I can create a[100] but then write happily to a[200].

    The problem, however, is even worse than just memory management. It's sanity checking of all input. In C, yes, this can be memory management - validate that the n+200th location is within the defined scope of the variable. On the web we have SQL injection, buffer overflows, insufficient permission checks (yes, I've seen CGI programs where a parameter of ../../../../../../../etc/passwd would display the passwd file) and so on.
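
    That CGI hole is a missing canonicalisation check. A sketch of the test those programs never made (using POSIX realpath(); the document root shown is hypothetical):

        #include <limits.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Allow a request only if its canonical path stays under the doc root. */
        int path_is_allowed(const char *docroot, const char *requested) {
            char full[PATH_MAX], canon[PATH_MAX], canon_root[PATH_MAX];

            if (snprintf(full, sizeof full, "%s/%s", docroot, requested)
                    >= (int)sizeof full)
                return 0;                          /* too long: reject */
            if (realpath(full, canon) == NULL)     /* resolves "..", symlinks */
                return 0;
            if (realpath(docroot, canon_root) == NULL)
                return 0;

            size_t n = strlen(canon_root);
            return strncmp(canon, canon_root, n) == 0 &&
                   (canon[n] == '/' || canon[n] == '\0');
        }

        int main(void) {
            printf("%d\n", path_is_allowed("/var/www/docroot", "index.html"));
            printf("%d\n", path_is_allowed("/var/www/docroot",
                                           "../../../../../../../etc/passwd"));  /* 0 */
            return 0;
        }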

    32:

    As geeks we're used to thinking of problems in terms of technology. We build better/stronger/faster/more robust solutions to problems. But the problem isn't necessarily technological, it's social.

    As such, I think you've actually missed the biggest problem, and it's not technology. Identity theft is not a technology problem. Technology may compound it, but the problem is with lack of controls and validation with the credit reporting agencies, with credit card companies, with lenders etc.

    Why should knowing 4 things about me be enough to get a loan in my name and make me responsible for it? Or cause it to show on my credit history? I had people chasing me for loans taken out in the US before I even moved to that country!

    Technology is merely an enabler to underlying social control issues; it can make it easier and more automated to perform the attacks, but the attacks are only possible because of the lack of controls - and those controls would still be lacking if we were stuck with pen'n'paper.

    PEOPLE are the problem.

    33:

    "Related is speculation as to why DNS isn't in most-significant-order(i.e. 'org.wikipedia.www', 'com.yourbank.mail')"

    Ah, JANet.. how I miss you! In the late 80s I had an email address similar to USER@ecs.ox.ac.uk - that always struck me as more sensible than the internet format :-)

    34:

    I'm stupid. I, of course, meant USER@uk.ac.ox.ecs - DUH!

    35:

    Johnny@24: You're painfully naive if you think your spam problem is at all comparable to snailmail flyers.

    Imagine, for a moment, if sending you a paper flyer was free. Completely free. Absolutely zero-cost free. Why? Because you pay for incoming mail. People do not pay to send you mail, you pay to receive mail.

    You pay to store that mail until you collect it. You pay to collect that mail. You have to discard that mail. That mail works, very hard, to pretend it's real mail - whether that means plain envelopes to make you think it's a legit bill, or pretending to be from your friends by forging their addresses.

    A lot of this mail is, itself, harmful to you to receive. A small chunk of it is actually illegal to receive. But you receive it anyway.

    And it's free, for the sender. So senders send hundreds of MILLIONS of flyers at a time.

    The volume is so large that it crushes post offices. Not only is your mailbox full, but you can't reach your mailbox through the flyers spilling over out of everyone else's mailboxes. It becomes a multi-BILLION dollar a year expense just barely keeping ahead of the crush so that the mail can keep flowing at all. And even then, after the post office has raised YOUR prices more than tenfold (because, after all, the ONLY cost for all this is paid by you) to keep up, you STILL have to put time and effort into filtering AND into checking the filter AND dealing with the stuff that gets through.

    If spam were the same as snailmail, that's what you'd be dealing with.

    @Charlie: My vote for "biggest design failure" is SMTP. The design of email, from the ground up, allows griefing and encourages malicious activity at every stage. It relies pretty much entirely on trust and the assumption that the person you're talking to is honest and wants to deliver a real message.

    36:

    "...in the C programming language's memory management (cough, choke) system."

    I nearly spit my drink, laughing.

    I would emphasize END USERs, to be sure. Build in all the security you like, you're still going to get the office moron who, when presented with the dialog that says "Extract 'Notavirus.promise' to C:\windows ?" will do exactly the wrong thing.

    37:

    Scott@25: The problem with reverse URLs is that, at that point, you actually have the LEAST important part at the start of the URL.

    .com, .net, .org, or .uk - those are the parts I care the LEAST about when I'm reading a URL or an email address.

    You read URLs left to right: protocol tells you what you're doing, machine tells you what's doing the work, domain tells you where you are, and TLD is (in practice) entirely irrelevant except as a way to make sure someone's not faking this blog on "antipope.cn"

    Reverse the URL, and suddenly I've got the protocol telling me what I'm doing, then irrelevant crap, then relevant stuff, then the stuff that's most relevant to the first thing I read.

    The ubiquity of "www." is a separate issue, but it's actually a pretty handy identifier for meatspace signs to show that something is a web URL without needing a protocol identifier.

    38:

    The problem with URLs, period, is that you shouldn't have to care. As I alluded to earlier, directory services are a major failure on the internet.

    39:

    So how do you propose to handle networking without addresses, again?

    40:

    I am not going to even bother to try to come up with a solution -- I hang out with BIND authors, so I'm the dumb one about all this :). But that doesn't mean I can't recognize and understand the problem.

    You can look at the history of telephony for some ideas, here: initially, every connection was made via operator. Later, automatic dialers came into play -- but every house had a unique telephone book (e.g., if you called me, it might be 555-1234, but if your neighbour called me, it might be 555-4321). After that, they normalized for origin, so every caller could use the same phone number. Then smarter telephones came along, and you could look at a name in your directory and enter it. Now, smart cellphones accept tel: URIs, so you, as the end user, don't have to see the phone number.

    The big problem is directories -- the mapping from "Charlie's Diary" to "http://www.antipope.org/charlie/blog-static/index.html". Again, I don't have a solution for this... but why do you think that anyone should need to know this mapping? (Well, any random human being.)

    41:

    Where we went wrong: our species learned how to make and use tools that can be used in more than one way.

    That's really what we're saying here. The von Neumann architecture, the TCP/IP stack, the World Wide Web... they're all tools. They let us do wonderful, even miraculous things. Indeed, I'd argue that the development and widespread implementation of the World Wide Web is at least as significant an event as the inventing of the printing press.

    All that said, tools can be used to cause harm, or misused because of error or malice. If you create a technology which increases your ability to affect reality, and which allows you to be creative as to the nature of the resulting effects, congratulations: you've just created something that can be used for ill purposes.

    What are the actual problems facing us in the computer world? We have people who intentionally or unintentionally cause damage and disruption, and people who intentionally seek to steal and defraud. Can we create systems which make that activity more difficult? Sure... but let's not try to solve the problem of "When I cut myself shaving, I bleed", by using the "Apply tourniquet to neck" approach.

    42:

    I've got to disagree with #1.

    "1) The Von Neumann architecture triumphed over Harvard Architecture in the design of computer systems in the late 1940s/early 1950s."

    I'm a biologist, not a programmer, but you know, DNA resembles the Harvard architecture. And we've got viruses. Lots and lots of viruses. Retroviruses, even, that write themselves into DNA. But most viruses depend on data and processor to be two different physical entities, and they do just fine.

    So, no, the architecture wouldn't have made a difference. We'd just have different problems (side note to SF writers: Von Neumann-architecture genetics might get interesting).

    Considering that viruses appear to be more common than bacteria, just as parasitic insects outnumber non-parasitic insects (actually, most species are parasites), I think that the Web is just imitating nature.

    If so, the solution isn't the mythical perfect architecture, it's to keep breeding systems that can resist various forms of computer bug.

    Breeding?

    Yep. That's one solution to the parasite problem: sex. Scramble the DNA/hard memory every generation, and hope that the scramble/mix and go form is a little less scammable than the previous generation. Imperfect replication seems to be nature's solution to the Black Swan problem of parasites, and it does have at least a billion years' track record.

    A totalitarian computer architecture? Not a good solution. After all, how many totalitarian regimes have proven totally resistant to corruption?

    43:

    @41:

    We'd be a lot better off if we had an organized way to identify budding sociopaths and remove them from the population rather than rewarding them with placement as CEOs, politicians or even cybercrime bosses.

    But, absent a wholly foolproof way to identify and dispose of people who exist only to harm others, designing tools that are harder for sociopaths to misuse is an inescapable requirement.

    44:

    The URL is the "phone number" - and you're right, needing to remember phone numbers is a pain in the ass - but the ability to look at a phone number and realise that "Pizza Pizzza" is not actually who you wanted (or who they say they are) is invaluable. Even signing and certificates don't help if you hit a legit site that isn't what you wanted.

    Besides, most people don't use URLs, at all. There's the bookmark (which hits the "everyone uses a different phone number to hit the same location" problem) and the search engine - very few people ever type a URL at all. And yet, despite that, the URL serves a very useful purpose to people who know how to read 'em, just like phone numbers or event logs.

    45:

    Johnny99: before I gave up and outsourced the spam problem to Gmail, my server was routinely catching 20,000 spams per day, spiking to 60,000 during a joe job attack. That's for about five users, myself being the main one.

    Your lack of spam is the anomaly, not the norm.

    46:

    Adam, SSNs are a specifically American problem. We don't use 'em for identification or authentication purposes in the UK -- they're merely a hook into the social security system -- ergo, they're sod-all use for identity theft.

    So we could profitably examine what it is about the American SSN system that makes it so dangerous ... or rather, someone who's familiar with it could do that?

    47:

    Regarding C++:

    The problem is not whatever is used as intermediate code, be it C or something more abstruse. It is possible, albeit bogglingly difficult, to write secure programs in C, and a compiler will be better than a meatbrain at doing just that.

    The problem is that at any point in a C++ program, you can drop in to raw, uncensored, security-dead C, and it won't even bother to ask "Are you sure?"

    It is said that any language powerful enough to be really useful must allow you to shoot yourself in the foot if you try hard enough. C demands that you use your toe as a foresight.

    J Homes.

    48:

    Short version: In the USA, the SSN isn't just an identifier, it's a password. Possession of the SSN is considered sufficient proof of identity for many things - because the intention was that the SSN be the "password" to identify you, and then, lacking anything else suitably unique, they went ahead and used it as the "username" too.

    49:

    Much to comment on, but I'll only comment on the stuff I have first hand knowledge (and even so, apologies for length).

    It is probably worth recalling that TCP/IP was never intended to be much more than a research project used by academics and (primarily US) military and contractors. Much of the design of protocols in that era made certain assumptions that today wouldn't pass the giggle test (passwords in plaintext over the wire? no way to even pretend to do authentication?). And besides, all this silly TCP/IP crap will be thrown out when the telcos start deploying OSI TP4/CLNP real soon now.

    While the NSA played a role in declaring strong cryptography a munition, the die had already been cast long before on a lack of security in pretty much all the protocols that were in vogue at the time (IP, TCP, UDP, SMTP, FTP, Telnet, etc.) The sad part is that the IETF continued with the "just trust everyone" model of security long after it was known to be problematic.

    Also, depending on where/how you do it, adding authentication and/or encryption to packets would most probably increase the likelihood of D(D)oS attacks. For example, if you put authentication/encryption at the end systems, you'd be making end systems do (much) more work, thereby reducing the number of packets you'd have to send to flood the system. Protecting against D(D)oS is hard.

    On DNS not having authentication, the fact that DNSSEC is just now seeing significant deployment is partly due to the fact that authenticating DNS data doesn't really buy you all that much given the way the DNS is currently used and it costs a lot in terms of system resources (and software developer brain cells). It wasn't until Dan Kaminsky figured out a way to make remote cache poisoning effective that folks took the need to strongly protect DNS caches seriously.

    However, more to the point, I figure part of the answer to your question "where did we go wrong" was in the assumption (or more likely, lack of consideration) that the bad guys were stupid and wouldn't notice/take advantage of the obvious vulnerabilities. It took events like the Morris Worm for people to even begin to take security on the Internet seriously. Security is hard; far easier to just go shopping.

    (You may notice a pattern: people have to get slapped in the face hard before taking things seriously. I suspect this is the fundamental problem.)

    50:

    On the other hand, at least ESR has apologised for sendmail.

    But yes, although I would expand it to cover most of the *P stacks. NNTP has the same problem (although it is rare now), SNMP is continually problematic. And the list goes on.

    NTP is the rare exception - sometimes.

    As someone else pointed out, it all boils down to an essential naivety amongst hackers that people will play nice. And yet, they still play cut-throat Diplomacy.

    51:

    @33, I too remember JANet and the Colo(u)red Books, albeit not particularly fondly. When I asked Paul Mockapetris (the guy who defined the DNS) why most-to-least significant, he grumpily pointed out 3 things:

    1) the DNS uses the same format as postal addresses in the US and many other countries, i.e., [house] [street] [city] [state/province] [country].

    2) backwards compatibility: most users were used to dealing with HOSTS.TXT which had single labels. When the DNS was initially deployed (and even now), it was trivial to simply append a "default domain", e.g., isi.edu, and have most connections go to the right place. If the DNS were least-to-most, you'd have to parse the destination string (e.g., email addresses, looking for the non-escaped '@' sign) and know a priori where to insert the default domain.

    3) "it's a presentation-layer issue you dolt, if you don't like it, rewrite the UI to display the labels in whatever order you like."

    I learned not to ask PVM questions after a while... :-).

    52:

    I have to agree with the Biologist in the room. We have a virtual environment of tools replicating functions of biology (viruses). Darwinian evolution mapped to virtual constructs (got your virus updates for the latest malware signatures today?).

    There is both tremendous flexibility and risk inherent in interacting with 'evolving' systems. How much baggage do we carry in our genomes that came from such battles? Some isn't baggage and helps make us what we are. It's possible the same will hold true for our computer systems. As more and more of a computer 'appliance' is taken up with dealing with threats of greater and greater complexity, at some point intelligence crops up. For what identifies a threat better than intelligence? You either end up with systems that are so unwieldy that they haven't the spare cycles to process their function, or intelligence comes along to make identifying threats easier. Our current computing tools map out to a bit past the first multicellular lifeforms: no intelligence yet, but a lot of communication between cellular clusters and lots of attempts at subverting function. I think the first wild AI will crop up in all these systems interacting as the complexity level of threats grows.

    Better designs might have helped but threats have a way of examining solution spaces the designer never envisioned. Different design choices made at the beginning of the tech revolution might have mitigated some current threats... but much is only known in retrospect. Thinking of all the nefarious ways your design can be subverted is like a central authority making optimal economic decisions for everything. Some things can only be fully tested out in the wild, be it the best price on items or your 'secure' design.

    One approach to eliminate SOME of the threats from websites, programs, etc.. is to run virtual copies of Linux or Windows that are created from golden images on the fly each time. These virtual machines can be infected but that infection doesn't make it into the golden image. Several web browsers are implementing a similar 'sandboxed' approach to limit their vulnerabilities. I recall several SCI-FI stories where the AI tests incoming information in a 'sandboxed' section before 'ingesting' the data...

    In the end however the vast majority of the world will continue on WinXP level tech providing plenty of fertile ground for the virtual arms race. Some might opt for Linux or Mac that aren't as heavily targeted to mitigate some of their exposure. In the end however I think we're just participating in an arms race that won't stop until the tools wake up and take the level of play out of our reach.

    53:

    It has been mentioned in this discussion that most hacking is the result of social engineering. So what are the main architectural flaws of the brain that allow this?

    54:

    Charlie,

    Much vertical head movement here as I read your list.

    I would add two things to your list:

    "Our irrational fear of forwards compatibility"

    Whenever we make a change, we concentrate 100% on backwards compatibility, despite the fact that the number of programs it can affect is finite, whereas a potentially infinite number of programs have yet to be written.

    POSIX, as an example, was rubberstamping the lowest common denominator of all relevant operating systems, rather than thinking about what an operating system for the future should look like and be able to do.

    This is also why the C programming language still does not know what a linked list is, and still uses the ASCIIZ string format you complain about. (ASCIIZ was not wrong at the time; it should have been upgraded around 1985 when VM systems emerged, but backwards compatibility prevented that from happening.)

    "Computer Science"

    Somehow the academic discipline of Computer Science became abstract math with a disdain for things which actually run on hardware, typically exemplified by Peter Naur's much-quoted "Real Computer Scientists writes in nothing less portable than pencil #2"

    Today CS grads still get taught how all the classical algorithms perform on a ZX81, but not how they perform on a PC with a virtual memory operating system, N levels of caches, write buffers and all the other wonders of modern hardware. (See also: http://cacm.acm.org/magazines/2010/7/95061)

    Poul-Henning

    55:

    "On the other hand, at least ESR has apologised for sendmail."

    Why on earth would he do that ? He didn't write it...

    Are you confusing Eric S. Raymond with Eric Allman ?

    To my knowledge, Eric Allman, CSO of Sendmail Inc, has not apologized...

    Poul-Henning

    56:

    This is where we went 'wrong':

    'Complicated systems produce unexpected outcomes. The total behavior of large systems cannot be predicted.' - The Systems Bible

    A lot of great examples of bad security decisions have been cited in the original post and in the comments. But we're still getting it wrong, even when we put our minds to it. Even the best efforts of software vendors result in small, linear improvements in security. The problem is that our technology has evolved to a point of sufficient complexity that it displays unexpected behavior -- and this unexpected behavior can be exploited by other people.

    After working in computer security for a decade, I long ago gave up hope for a silver bullet to solve everything (or even anything). I now believe that having crossed this complexity threshold, we should expect every system we design to have these flaws, and a bloody battle to patch and fix them where we can.

    P.S. Human beings are complex systems too.

    57:

    It has been mentioned in this discussion that most hacking is the result of social engineering. So what are the main architectural flaws of the brain that allow this?

    That's a key insight, I think.

    These cognitive biases are a good start, but I'm not convinced they're the whole story.

    Anyone got any pointers to the effects of cognitive bias on the design of complex systems?

    58:

    Ross Anderson has argued, IMO fairly convincingly, that the fundamental problem is economic. If you take the extra time to build a secure system, the guy that built the insecure system and released it first will have cornered the market.

    Additionally, there are regulatory issues. Anderson claims that the introduction of chip-and-pin in the UK was primarily led by the banks' desire to make the customer responsible for losses (if the technology is "secure", any disputed withdrawal can't be the bank's fault, can it?). This coincided with the supposed new crime of "identity theft" - again, removing the responsibility from the bank to the customer.

    At a more fundamental level, I agree with those previous commentators who have mentioned social issues. Scammers and con-artists exploit our need to trust each other. If we couldn't trust anyone, we would be a society of paranoid people, and that would not be a pleasant society. The problem in an online world is knowing who to trust, when most people don't have the necessary skills to make a decent judgement.

    That sounds like I'm putting the blame on the "user". Actually, I don't agree with that; we need decent, secure, usable systems that remove many of the opportunities for scammers and that help users to make informed judgements. But that's hard and goes against the economic flow.

    59:

    The whole strategy of incremental upgrades to telecommunications systems, never replacing anything, just paving it over with another layer, leads to systems that can never be secure, and can never be anything but obscure. This might or might not affect the ease of fraud per se, but it gives me an excuse to vent:

    The ADSL provisioning system in the US (SE / BellSouth - but all regions are similar) requires a flowchart that is about 8 sq. ft. of 1 sq. in. boxes labeled in flyspeck-3 with enough arrows and lines all over the place to plumb an oil refinery. Each of those boxes represents a computer program running on multiple mainframes, in use over 20 years on average, in some cases up to 40 years, which was never properly documented, for which the original developers are not identifiable, let alone available, which does something vital and many other things that no one remembers. To actually do anything a hacker would generally need to know how to use several of those (punishment in advance!) and get passwords and appear to be acting from the right network, so the phone company assumes that is secure enough. So they don't secure the few new, easy-to-use systems that can actually be used to take down the network entirely, such as the management system for the ATM switches. A dim 13 year-old could take Florida off the net in a few minutes with NavisCore, but for several years they left the default password on it ("Cascade").

    On the other hand, the main wiring database (COSMOS/FOMS, now 40 years old) had seven different regions each with a separate individual user account password which changed at various phases of the lunar month, depending on the region and the user's horoscope, only resettable by one old lady with weak kidneys in Tennessee. The documentation was only available to the Legion of Doom, but they helpfully posted some of it on Phrack. It really should have been called the Cable Telephone Heuristic User Lookup Utility. I always got a very Laundry vibe when using it; I almost suspect that it may have had something in it, some ancient security measures found in a high desert valley of Antarctica with tentacles and too many...

    60:

    Hey Charlie,

    Maybe add 'homoglyphs' to your list?

    greetings,

    Jacques

    61:

    To the best of my knowledge, ESR has never apologised for anything. He certainly hasn't apologised for fetchmail (the mail retrieval program he did write).

    62:

    The split between Harvard and Von Neumann architectures is a one and many divide -- if you want a program to do one thing in its life and nothing else (a microwave oven controller, say) you can use Harvard architecture as it will be written as firmware and run in ROM and never be changed. You know how much ROM for code and RAM for data it will ever need and you build that amount and no more into the hardware.

    If you want a general-purpose computer you really have to use a Von Neumann architecture where different programs can be loaded into a writeable memory space at different times and then executed. This allows for small programs that use large amounts of data and large programs that use small amounts of data to run on the same hardware, something that's not possible on Harvard architecture systems.

    63:

    I think that 2) and 3) are actually the usage of the wrong thing in a wrong way, by themselves C and TCP/IP are wonderful.

    String handling in C is something that should be written a) by CS students, to learn how nasty it is, and b) by experts in specific parts of the code that need to be optimized, using the proper apparatus (like a finite-state machine; see for example the parser in OpenSIPS). It's a pain, and not supposed to be done without a lot of careful thinking.

    In TCP/IP, also, the point is that IP gives you unreliable transport between two points (they say "best effort", but it amounts to the same thing). It's simple and working - nothing else could scale to the number of end-points and traffic that the internet has reached. Even the phone network (including all ISDN and GSM crap) is at least an order of magnitude smaller and probably has two or three orders of magnitude less traffic. Here security is part of the upper layers, where it actually can be done right (because one IP address doesn't identify a person, or even just one machine); it needs to be done really close to the application, because there you know what's happening. SSL (even though it's crap) is far more suited for online banking than IPSec (and far easier to deploy, anyway). (Also, seeing the progress of crypto in the last 20 years, I would say that any crypto-secure protocol from 30 years ago would be a joke now.)

    64:

    re no.22

    Charging for email will do sod all to stop spam, because most is sent by botnets running on infected PCs. About all this will do is penalise the people who let their computers get infected.

    65:

    The lack of builtin digital signature validation of most data, especially executable data, is a major architectural problem with negative security and performance consequences. Van Jacobson delves into this in his 2006 Google Tech Talk, A New Way to look at Networking, in which he describes the problem with a historical perspective and argues for a content-centric network architecture, which he's researching at Xerox PARC.

    66:

    John @ 35 / Charlie @ 45 - sorry, to clarify - I know I am sent far more spam; the numbers I quote are what I actually see post-filtering. I have to say I rather thought that spam filtering was on top of the problem from an end-user perspective. I used to get a lot more ten years ago. Maybe I'm unusual.

    Clearly for corporate/professional users there is still a cost in doing the filtering - I'd be interested to know how the reduced cost of postage/post rooms compares to the cost of supplying and maintaining email infrastructure - but for those using gmail/hotmail etc. for personal use it's effectively free (at the point of use, at least). How much spam do you manually need to wade through post-gmail, Charlie?

    Marcus @ 64 - was about to say just that. This also applies to emails needing to be from an authenticated source. It's like a lot of things - you just shift to the next vulnerability along (ID cards are a good example of this - if they are effectively impossible to forge you just corrupt the process of being issued with them - let the state make genuine ID cards on the back of a co-opted application process).

    I'm not sure we see all this fraud ultimately because of vulnerabilities in architecture; rather, anything involving large-scale interaction between people will see varieties of scams/frauds/free-riding. The architecture vulnerabilities are just the proximate cause. We are seeing this with increased social-engineering frauds as some of the more obvious vulnerabilities are plugged.

    67:
    • Computers advanced faster than networking. Hence, instead of video-phones, the computer was dragged kicking and screaming from its native habitat and onto the desktop. (Not that video-phones and centralization make for a wonderful scenario, but it might have been more secure.) This perhaps also skewed us towards fast serial processors pretending to do multiple things at once (with potential for malicious interference between tasks), rather than something more like a cluster.

    • The model of computation that became popular was Turing's, rather than the lambda calculus. Mutable data! State! Imagine if early data storage had been more like optical disks: Write-Once-Read-Many, only allowing bulk wiping rather than small rewrites.

    Imagine if instead of C we had used something like Erlang for systems programming.

    68:

    Personally, in the USA I blame the Senate. We don't actually have adequate internet law, civil or criminal, or enough police enforcing what laws we do have. The biggest crime on the internet is spamming. Now, in the USA, we had some pretty good law enforcement against spammers...until the spammers got to Congress, wrote a weak national law, and made it override state law. There are nowhere near enough cops on the identity-theft beat--Seattle has no police who work on identity theft. Zero, none, nada, zip. If your ATM card is skimmed in Seattle (and the cards themselves are less secure than European cards) the police will take a report and send you to your bank, not look for the skimmer. There is one--one!--police detective in the north Puget Sound region (in Everett) who takes an interest. Just another one of those functions that the free market provides, I guess. Another sort of crime, common enough, is market fraud, both large and small. We have no national (let alone international) small claims court where actions can be brought to resolve it, so disputes pile up, and one has to rely on the tender mercies of eBay's arbitration system in most disputes. Etc., etc., etc.

    You can't have law if no-one writes the laws (or writes only inadequate ones), no-one provides adequate courts to litigate them, and no-one enforces the laws that do exist anyway.

    BTW, the Harvard/von Neumann architecture issue is a red herring. Modern computers have had separated instruction and data spaces for a long time, and it just doesn't help.

    Croak!

    69:
    The big problem is directories -- the mapping from "Charlie's Diary" to "http://www.antipope.org/charlie/blog-static/index.html". Again, I don't have a solution for this...

    If I type "Charlie's Diary" in my firefox address bar I get sent to http://www.antipope.org/charlie/blog-static/index.html FYI

    70:

    As far as a non-fascist secure OS goes, I think Qubes (http://qubes-os.org/Home.html) is the right idea. Do your banking in a separate OS instance to your web browsing.

    Everitt@11: Sadly, the Amiga had loads of viruses. They survived over reboots by infecting executables and floppy disk 'boot sectors'. The popular machines used the original 68000 CPU, which provided no memory protection hardware, and were therefore impossible to secure.

    71:

    (Also Alex@70)

    The one thing that a sensibly designed ROM-based OS has going for it is that the user can always get the machine back to a working and temporarily virus-free state. (I don't know how/if this worked on the Amiga; Acorn's RISC OS had OS on ROM and updates on disk - if something went wrong, there was a hard-wired keypress on bootup that loaded only the ROM OS, and prevented the system from touching any writable media until the user gave explicit instructions to access them.)

    So long as the OS in ROM is competent to allow you to collect/activate a current anti-virus tool in some way, you can then activate your data storage (or insert your floppies) and clean the problems away. Whereas a system based on writeable media can't do that, because the virus can arrange to take control of Windows' "safe-mode" or local equivalent. (Equally, if your anti-virus tool isn't self-contained, then you're stuffed; the designer of the tool is a complete idiot, of course, but knowing that doesn't solve your problems.)

    "Live" bootable CD/DVD versions of Linux serve something of the same role, these days, IMO: if something goes wrong, the CD can do virus-proof maintainance/recovery.

    72:

    I'll put in another vote for lack of authentication of email. The fact that you have no method at all of establishing who actually sent a message is vile, and one of these days has to go away completely.

    It would not eliminate spam, but it should to a large degree remove phishing attacks through email, and the "Please Send your Password" crap would be greatly reduced.

    My #2 is the credit card companies' policies: they take no responsibility for fraud, tossing it all at the vendors. Schemes to weld CC authentication to websites have failed mainly due to exorbitant fees and ridiculous testing processes. It is still far too easy to use someone else's card online -- authenticated email would be one help for this; being able to report fraud as a vendor is another (we sold children's books, and when a customer wanted to use a US credit card to ship a large volume of expensive, adult-friendly goods such as Harry Potter or dictionaries to an address in Romania or Indonesia, we'd just throw the order away). We'd call up Visa/MC, they'd say the card is valid, and not care that the transaction is clearly fraudulent. There should be a bounty paid for those, to offset the chargebacks when you don't catch them.

    73:

    Hi Charlie,

    Two pointers on cognitive bias, again from the biologist in the room:

  • We prefer rational systems. Contrast this with evolution, who is the worst spaghetti programmer in the known universe. The problem is rational programs are more hackable. After all, if the code is optimized to be understood by humans, it's therefore optimized to be hacked by humans.
  • Of course, the problem with instituting Darwinian algorithms is that we may well have problems understanding how the damn things work (for instance, the Darwinian algorithm may use an intermittent short in the machine it's running on as the basis for its random number generator). This can lead to cargo cult-style programming (I don't know why this works, I just do it 'cuz the machine tells me it works), and that is arguably a worse problem than rational programming.

  • Another cognitive bias is the same problem environmentalists face. Call it the seven generations problem. Supposedly, the Cherokee considered the effects of a major decision on seven generations (forward and backward) before implementing it. Dismissing that as just a romantic notion is an abrogation of the responsibility to care about anyone else.
  • This is what Dave (#51) referred to as the forwards and backwards compatibility problem. And it's a way of thinking that's been notably absent from the computer world, except where people have been turned into drooling zombies either by the prospect of "new, shiny, cool" or the prospect of obscene profits that will allow them to rule the world. Or both.

    As a result, much of programming and the net has evolved in an ad hoc way, following greed, funding, and small-group social pressures more than anything else (remember, the net was brought to you by the lowest bidder).

    Anyway, it's not a problem that's unique to the computer world. Witness exhibit #1: the environment we live in.

    74:

    I'd like to add a couple of revisions to this:

    (1) I note the irony that using this reply form requires using unsecured JavaScript in HTML.

    (2) TCP/IP's real problem was that its designers were ignorant of a thousand years of cryptography. The unstated assumption of TCP/IP's system is that every intermediary between A and B was part of the same, trusted system with strong access control — that is, that anyone intercepting a message sent using TCP/IP would him/her/itself be an otherwise-trusted user. This is a special case (or, perhaps, the general case!) of the "security is someone else's problem" meme.

    Remember, TCP/IP was designed for a closed loop of communications within and between labs sponsored by DARPA and ARPA. It was thought that the difficulty of physically accomplishing an intercept was all the security that would be necessary... particularly since there was a big "Do Not Use This System for Classified Data" sticker on all of the early TCP/IP terminals, meaning that the only people who had even an inkling of security weren't using the system!

    (3) C and its descendants are, themselves, a disease. May Kernighan and Ritchie burn in hell forever, for failing to learn something from one of their predecessors at Bell Labs (Claude Shannon): the concept of the known-plaintext attack, and its implications for data integrity. And that's the limit of what I can say on that sub-subject.

    This is, in a sense, a coordinate problem with the Von Neumann v. Harvard architecture debate: It's not just that data and programs share memory space under Von Neumann architecture, it's that every tool that one is expected to use for manipulating data is equally adept at manipulating programs — and vice versa. It's pretty obvious that there were no wood- or metalworkers involved in designing C or 1960s/1970s computer architecture... as they would have screamed (very, very loudly) about that!

    (4) Don't entirely blame the US for the SSN problem; other nations have similar issues, they're just not channelled through the SSN or quite so pervasive in scope. This arises from three factors more common in the US than elsewhere:

    * The considerable degree of overloaded identifiers (names) for persons not related by clan/tribe/other clear grouping who do not know each other, combined with the much greater physical mobility in the US (having commanded a military unit with three Michael D. Kennedys in it at the same time -- one Boston-Irish-to-the-core, one great-grandchild of a South Carolina slave family, and one Asian adoptee from Oregon -- just drove this home);

    * For administrative convenience, the linking of disambiguated, non-secure recordsets with disambiguated, secure recordsets (and, worse yet, disambiguated, secure, compartmentalized recordsets, such as financial and medical records); and

    * Political-security-apparatus paranoia that encouraged all of the above... such as the "need" for the FBI to keep tabs on all them subversive students in the 1950s and on, and ensure that the right inflammatory, unsupported allegations ended up in the right useless files.

    In short, it's social pressure evolving from the misuse of security systems.

    76:

    "Strings in C [...]by experts in specific parts of the code that need to be optimized,"

    C strings are not optimal on any wider than 8 bit virtual memory hardware.

    For instance the strcpy() function has to move the string one character at a time, because there is no way of knowing if the character past the terminating NUL is accessible, it might just happen to be on the next non-mapped VM page.

    Representing strings by two pointers, start&end is generally faster, often much faster, than terminating them with NUL on newer than ZX81 hardware.
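
    A minimal sketch of the pointer-pair representation being described -- my illustration, not Poul-Henning's code, and the struct and helper names are hypothetical:

        #include <string.h>

        /* Hypothetical pointer-pair string: the text lives in [start, end). */
        struct span {
            const char *start;
            const char *end;
        };

        /* The length is known up front, so one memcpy (which the C library is
           free to do word-at-a-time or vectorised) copies the whole thing. */
        static size_t span_copy(char *dst, struct span s)
        {
            size_t len = (size_t)(s.end - s.start);
            memcpy(dst, s.start, len);
            return len;
        }

        /* strcpy(), by contrast, has to test every byte for the terminator,
           roughly:  while ((*dst++ = *src++) != '\0') ;  */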

    Poul-Henning

    77:

    I agree with a combination of Dave@58 and Agduk@56. It's not that we got it "wrong"; it's a combination of economics driving priorities other than security, sociological issues, and the fact that the system is so complicated that security is prohibitively expensive, if not impossible.

    To make an extreme analogy, did Henry Ford get it wrong when he built cars without airbags, anti-lock brakes and traction control? Today that would be unacceptable, but back then they weren't even conceived. As with early cars, we're still learning what the Internet needs for safety, how to implement it in a manner that makes economic sense, and then how to get people to "do the right thing" either via usability, economics or laws.

    To look at it another way, if we had designed the Internet to be much more secure, I suspect it would still only be in use in some combination of the military, banks and the like instead of benefiting the broader public. Or, more likely, something else would have beaten it to market.

    Yes, we spend a lot on Internet security problems, but it's a small fraction of what the Internet enables.

    78:

    Charlie,

    That's very true: the SSN is a specifically American problem, which overlaps with our lack of a privacy law and our love of computers to make the id theft problem worse.

    What makes it bad is

    (1) It is used to link records even if the rest of the record doesn't agree. For example, if two records have different names, addresses and dates of birth, but the same SSN, they will often be linked.

    (2) Reporting agencies collect and link records which are sold to businesses and government in ways which impact people's lives. Everyone in the system is motivated to make the individual look as bad as they reasonably can. So it's hard to expunge bad data.

    (3) This is where (what I see as) the real ID theft comes from: the data rolls around an opaque set of databases and comes back to haunt people years later -- the sense that one's reputation has been stolen and tainted. There's a good report from the ID Theft Resource Center called "Aftermath" that covers this in some detail.

    I don't know that other countries tend to have enough modern controlling databases loosely linked without privacy controls to produce 3. So in some sense, fraud by impersonation is now universal, but the lingering and ongoing effects may be somewhat unique to the US.

    79:

    I can add a couple of architectural issues:

    Routers implicitly trusting their peers for purposes of propagating routing table information. So far we've mostly seen accidents as a result of this (e.g. http://arstechnica.com/old/content/2008/02/insecure-routing-redirects-youtube-to-pakistan.ars), but the potential for deliberate misuse ought to be obvious.

    Another problem is the proliferation of 'Trusted Certificate Authorities' that are anything but trustworthy, undermining the (admittedly limited) security of SSL: http://www.eff.org/deeplinks/2010/08/open-letter-verizon

    80:

    There are a number of errors in your exposition.

    (1) Execute permission has been a feature of many memory management architectures for a very long time. The issue is getting it used. A more general problem was taking an architecture (the Intel 4004) originally intended for a desktop calculator and using it for a general-purpose computer. It would have been nice if they had started with something designed for general-purpose computing.

    (2) String handling in C is not an issue for other languages, even if they use C or C++ as an intermediary. Moreover, buffer overflow problems can occur even without null-terminated strings. These problems could be avoided by having languages with mandatory array bounds checking. This is very difficult to do in C/C++, and it is the real problem you should have identified. Fundamentally, C is not a high-level language; it is high-level assembler.

    (3) IPSEC has existed for a very long time. It is not much used. More importantly, people still use cleartext passwords on the Internet, which is horrible. We should all be using something like Kerberos. Once again, the technology has existed for a long time, but it is not used.

    (4) The more general symptom is that programmers tend to add features without thinking about security. Security thinking requires training and perhaps the right sort of personality. The real symptom is allowing anyone to define protocols without security review.

    (5) What makes you think the iPad is secure? This is a silly assumption (look at the recent PDF exploit). Give up on user education; it is hopeless. If programmers don't solve the problem independently of user education, the problem will never be solved.

    (6a) You might want to mention software piracy here. Microsoft systems are insecure. Unpatched Microsoft systems are worse than insecure; they are screaming out "come get me". It is easier to get Microsoft bits than it is to get Microsoft patches. Hence there are a lot of worse-than-insecure systems out there. If people ran Linux, it would be possible to have everyone fully patched, reducing spambot nets and the like.

    (6b) Another part of the problem is that Microsoft only writes a fraction of the OS. Third parties provide lots of device drivers and other insecure software that gets bundled with their code. This makes it more difficult to secure.

    81:

    There is one general rule that explains what happened to us. I originally heard it in the context of floating point, but it is clearly very widely applicable, even beyond computer history. That rule is, "the fast pushes out the good". Here fast could be execution speed, or time to code, or time to market/revenue.

    Until we find a structure that encourages the good to push out the fast, we are lost. Of course, our civilization really will be toast in a few decades anyway, so the question is how to reach beyond the coming Dark Ages to influence those that rebuild from our ashes.

    Interestingly, the proponent of the original rule, Professor William Kahan, of UCB, eventually succeeded in pushing IEEE 754 upon the world, which seems like a counterexample to his own rule. Progress is possible, but at the cost of great effort.

    82:

    And just as monoculture agriculture is vulnerable to a successful disease, so is monoculture in our computing systems.

    Biology uses [mostly] the same genetic code, so it is the changed mix of proteins and their interactions that keeps the parasites at bay. What is the equivalent in computing?

    83:

    If we have to pick on deep structural problems that led to the kind of second-order and third-order (and higher) effects that allow this sort of widespread, large-scale fraud, we could point out that we chose the wrong side to approach modern computers from entirely.

    Forget von Neumann -v- Harvard, because odds are high that those low-level differences would be simply hidden away by the compilers anyway - C on a PIC chip looks like C on almost any other chip in the application level of code (granted the system-level stuff looks funny, but that's true of all code running at that level).

    The problem stems from the approach of making personal computers by taking what were in effect games consoles like the Atari 400 and beefing them up to become Apple ]['s and IBM PCs and so forth; instead of taking minicomputers and further scaling them (and their operating systems and already fairly well developed philosophy of not trusting users) down to the personal level.

    There were efforts made to do that, but mostly the work was done by terminal manufacturers like Wyse who thought you could have central computers and remote terminals; and the networks in place were too slow to allow that. Being able to buy an Apple ][ with a spreadsheet in ROM and get to work today was chosen over taking time to provision a system securely and correctly.

    And that whole mental approach of Now Now Now! has permeated the entire industry ever since. Charlie's own diary of how much fun he was having as he was writing perl code to talk to banks points it out - our entire software industry, at a very, very deep level, has a cultural bias towards doing things faster and faster and fixing the bugs in the design as we go, and even the few who advocate slowing down and thinking aren't really screaming to stop this; they're more of the opinion that you can do it successfully if you just adopt the right methodology, whether that be Agile this or Test-Driven that. We talk about how to avoid paralysis-by-analysis and fail to notice that we went right past avoiding it and straight into not analysing anything at all, at least on any kind of long-term basis. The loudest voices in the software industry still advocate shipping stuff as the gold standard for software, to the point where shipping crap is often seen as better than holding off and shipping gold next quarter.

    When that's the mindset, security holes are the price of business.

    But maybe if we'd gone with the approach of shrinking mainframes instead of expanding calculators, we wouldn't have this issue...

    ...we'd have something else instead :D Criminals fill niches, if you close off one, they'll just find another...

    84:

    A slightly similar discussion (or digression, or reminiscence) has been going on on the UK-CRYPTO discussion list.

    Nobody has mentioned a Memex yet though.

    The corporate thought "everyone else uses Microsoft, so we have to" is also a problem.

    85:

    Meh. This is hindsight bias. Somehow we knew all along how to do this, and if we had just taken the time to think about it before we started, all this could have been avoided? Balderdash!

    I am not convinced this could have gone any other way. Yes, MS deserves a huge amount of blame for specifically blocking the adoption of certain security features that interfered with what they perceived to be their biz interests.

    But the web is a social construct, and that means you have crime. That is why cyber crime was not a huge problem until we got the web. The crimes would be different in structure if all of Charlie's mistakes had not happened, but we would still have crime on the web.

    It's the wet-ware, damn it!

    86:

    Well, Microsoft actually had it more correct at first. Windows NT 3.51 was a modified microkernel with the video and printing implemented in user space. They pulled that back into the kernel with 4.0 because people wanted it fast, not stable or secure.

    I think lots of people know how to architect stable and secure systems. As others have pointed out, fast and cheap often win in the market. The customers are probably as much or more to blame than the software makers.

    87:

    Just wondering--how do you know this "theft" is a problem, macroeconomically speaking? Money moves around, but identifying where it "should" be is fairly difficult, and attempts to do so tend to end badly.

    Not that I'm suggesting theft is necessarily a great way to spend one's time...but for all I know its existence is doing us all some favors.

    Not the sort of area where I'm likely to form an opinion, but I do wonder why commenters above seem to be assuming there's an actual large-scale "we should all do something differently" sort of problem?

    Also, I find it amusing that with JavaScript blocked I get an error that says my submissions are an "invalid request"--using firefox 3.6.3 on Ubuntu 10.04

    88:

    There are huge numbers of protocols (ARP, and DNS most crucially) that assumed they were on trustworthy networks. Just saying 'add encryption to TCP/IP' is foolish; you've got to engineer trust into the system, and simply adding encryption doesn't help with that. It's a n00b security mistake to assume that encryption would have made a whit of difference (we'd just be dealing with transitive-trust-based attacks instead of attacks on in-the-clear applications).

    The main mistake we made with the internet's design is not making crucial pieces of software have a hard, built-in "expire-by" date after which they'd stop working. We've still got FTP, ffs! As Ray Wylie Hubbard says: the most important thing about song-writing is to ask yourself "do I still want to be playing this in 25 years?"

    I did a rant about this issue for TEDx in Baltimore: http://www.youtube.com/watch?v=o59mQhBiUo4

    89:

    The problem is the users (I say "users" rather than owners because most computers are in office environments where the company owns the computers but the people using them effectively control what hardware and software they buy).

    When most office computing was done on mainframes, viruses and other computer crime mostly weren't possible. This was partly due to a lack of connections to the outside world (the Internet was limited mostly to certain government and academic uses until about 1988), but the main reason was that mainframes and their operating systems were designed to protect against (for example) one process clobbering the memory space that another was using. The latest mainframe systems -- in particular VAX/VMS, in which I specialized -- were bulletproof; you couldn't possibly infect them with a worm or virus, and we had one machine that stayed up for 8 years without a reboot. (It did not use Harvard architecture, but with a well written secure kernel, you don't need it.)

    Then the desktop-computer salespeople came in, and showed everybody cute-looking "toy" operating systems like the Macintosh (this was years before IBM made a PC). In theory these machines were more efficient (measured in raw Hz or FLOPS) than mainframes, but that's partly because they weren't counting a lot of the operating system "overhead" and partly because they decided a lot of that overhead -- in particular the kinds of protection found in VAX/VMS -- wasn't necessary, and took it out.

    No engineer would have even considered switching, because even the largest Mac could simply not have held the 200,000 line FORTRAN programs we worked on. But the people in the administrative offices were all entranced by the Mac's cuteness and its bells and whistles, so just about everything we bought from that point on was a Mac, or something similar to a Mac in all the important ways.

    The PC is no different in concept, but the PC extended this bad marketing situation when it opened the software business to competition. The competition between (for instance) MS-DOS and DR DOS in the early 90s showed the pattern: Digital Research took the time to debug DR DOS before shipping, while Microsoft rushed the new version of MS-DOS out the door with new features but the bugs still in -- and everybody bought MS-DOS, and there have been no more versions of DR DOS. Thus the love for bells and whistles has effectively driven reliability out of the market.

    Everything you've complained about in this article is a direct result of this marketing situation.

    The software vendors (or at least the major ones) are perfectly capable of producing reliable, un-buggy software. They will start doing it when consumers start making their buying decisions based on reliability rather than bells and whistles in large enough numbers to make it pay to produce reliability.

    90:

    Passwords. There have been cheap devices for two-factor authentication for decades. There is no standard, and big money-moving operations like banks mostly don't use them. For ordinary humans, passwords fail. They are either memorable and crackable (and often shared across sites, one of which will get a database hack), or a heap of forced noise that ends up written on a post-it.

    SQL. If database APIs were binary, you couldn't inject into them from a badly escaped text box. For some reason, the world has standardized on a textual language instead. (The same language has resulted in programmer-millennia of wasted effort bridging its resolutely textual nature to various programming paradigms.)
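
    A minimal sketch of the distinction, using SQLite's C API as a stand-in (the table and variable names here are made up for illustration): pasting user text into the SQL string is injectable, while binding the value through a prepared statement keeps the data out of the query language entirely.

        #include <sqlite3.h>
        #include <stdio.h>

        /* Injectable version (don't do this): the user's text becomes part of
           the SQL itself, so input like  x' OR '1'='1  rewrites the query:
             snprintf(sql, sizeof sql,
                      "SELECT name FROM users WHERE name = '%s';", user_input);
        */

        /* Bound version: the value travels out-of-band and is never parsed as SQL. */
        static int lookup_user(sqlite3 *db, const char *user_input)
        {
            sqlite3_stmt *stmt;
            int rc = sqlite3_prepare_v2(db,
                         "SELECT name FROM users WHERE name = ?1;", -1, &stmt, NULL);
            if (rc != SQLITE_OK)
                return rc;
            sqlite3_bind_text(stmt, 1, user_input, -1, SQLITE_TRANSIENT);
            while (sqlite3_step(stmt) == SQLITE_ROW)
                printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
            return sqlite3_finalize(stmt);
        }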

    91:

    On the matter of "still using passwords".

    My servers use public-key authentication for important stuff. My servers, my rules.

    So I'm hosting some web piles for a young lady friend, and she uses Dreamweaver. Dreamweaver supports "Secure FTP (SFTP)" which turns out to be the ssh-based sftp as opposed to FTP-over-SSL or something like that. What Dreamweaver does not support is anything other than password authentication.

    That's all right: I'm comfortable with setting up an sshd permitting password authentication, bound to 127.0.0.1:22, and setting her up with PuTTY and a session that opens a tunnel to that sshd; then her Dreamweaver SFTP client can talk to that. In this way, the password-verification service is kept isolated from the Internet, and the password doesn't go across the Internet in the clear. That's the theory.

    In practice, the PuTTY SFTP client works fine with this arrangement. Dreamweaver's doesn't, and its error dialog text could as well be "waah, it doesn't work, sulk" for all the direction it gives about what went wrong. I can look at the server logs and verify that it's not getting as far as trying to authenticate to the sshd listening on 127.0.0.1.

    Google can find me at least one blogger asserting that FTP over an ssh tunnel works fine for this. No, he's dreaming, it doesn't work, and it would have come as more of a surprise to me if it had worked, because I know how FTP works. You'd need one of those really old FTP servers that defaults to listening on 20/tcp for the data connection, and for Dreamweaver's FTP client to send neither PORT nor PASV commands, so you'd know a port number to tell ssh/PuTTY to tunnel.

    To her credit, the young lady friend does understand why I don't just let her use FTP. But I can understand that it would be much easier to get it working with Dreamweaver if I'd just open up FTP with all its passwords-in-the-clear and password-verification-service goodness. And that your typical US$10/month-and-under web/ASP shared hosting provider does just this, because at those prices they don't have the oomph to keep telling the customers "no".

    Add on top of this all the web browsers and other applications (including Dreamweaver) that know password authentication is how it's done, and help the user by remembering the password. There's a lot of installed base that needs rework to get past the use of passwords. Then there's what I would call the "password mindset", the people who think that's how it is done and just won't grasp one-time passwords or public-key authentication.

    92:

    Architectural issues that have resulted in security flaws.

    Multi-threading. Lack of bounds checking. Executable stacks. Avoiding handling error conditions.

    Of course, all of this comes about because of a lack of CPU power. Being secure is expensive. That's why C didn't have bounds checking. There are languages from that period that did (Pascal, Ada, etc), but they weren't as quick. It's only as we get faster and faster CPUs that we are able to spend cycles on security.

    When I learned Ada, we had a choice of compilers - 20min batch compilation that was "correct", or 5min interactive compilation which was fast. Compare that to immediate, interactive C compilation. :)

    It is like the Y2K problem. All those extra bytes? Expensive.

    93:

    JDG: Your account doesn't match my memory of the evolution of the personal computing market.

    94:

    One thing that I'm not sure I've seen mentioned yet:

    Almost all current OSes adopt a security model which was inherited from early timesharing systems, which assumed that any program that a user was running was acting as a proxy for the user, so of course it makes sense to give it access to all that user's data. Which made perfect sense in an environment where the users were researchers, and the programs were things that they'd written themselves, or "trusted" system utilities.

    Fast-forward to today, when users are frequently running code that they've downloaded off the net, from sources that can't necessarily be trusted in any way whatever --- and the security model on the desktop hasn't changed a whit. Why should the operating system even permit the sudoku puzzle that you downloaded from who-knows-where to open the file containing your email?

    To my way of thinking, this is where the mobile OSes are doing potentially interesting stuff, in terms of walling one app's data off from the others (Android, at least, gives 'em separate uids), and trying to limit and control the ways in which they share data, and access sensitive stuff. (BTW, I have had the experience of "this sudoku app wants access to my contacts? How about... NO.") Ivan Krstic's security work for the OLPC project is also interesting in this regard, though I'm not sure how much of that showed up in the delivered product.

    At any rate, I think this sort of thing is likely to be more important, over the long haul, than the iOS app store review process. As I think others have already pointed out, malware has already made it in there. The known examples have been pulled as soon as they were detected --- but it's impossible to be sure that some popular app doesn't have nasty surprises lying in wait.

    (And with either Android or iOS, there's still also the possibility of just plain bugs in privileged utility code leading to nastiness --- witness the recent "jailbreakme" hole that let a rigged PDF file take total control of an iPhone, to the point of overwriting the OS...)

    95:

    DNS security is an interesting case.

    DNSSEC-bis, as formulated, effectively promotes all of the nodes in the domain-name hierarchy -- everything from '.' (the root node) to the lowliest subdomain, at least potentially -- into certificate authorities.

    This is, on one hand, very neat: it effectively reduces the economic cost of getting your public key(s) certified as belonging to named Internet hosts and services; rather than needing to take proof of your Internet domain registration to a separate CA, you could instead automatically receive a certified copy of your public key when you register a domain.

    (One of the reasons I suspect it's taken a while for DNSSEC to be deployed is because Verisign have had -- until recently -- a strong economic interest in holding things up.

    Y'see, as well as selling SSL certificates, they also operate the DNS root, '.com' and '.org' TLDs. Signing the DNS root could have invalidated their own core market overnight.

    So, before doing that, they first introduced Extended Validation certificates, which provide more capabilities than those you'd be able to obtain via DNSSEC, and then subsequently spun off their SSL certificate business to Symantec for a billion dollars. A short time after that, they signed the DNS root records.)

    However, the root DNS node then becomes not only the root of the public Internet namespace, but also the root key for certifying all public Internet operations.

    ... who would you trust with that key?

    96:

    Charlie Dodgson, you'll find various PhD theses written in the 1970s (e.g. at the MIT Laboratory for Computer Science) recognizing the problem and proposing fixes. I was one engineer working on an operating system around 1980, called Amber, that was going to implement some of these ideas. Unfortunately, shortly thereafter the idea of writing new operating systems began to wither, leaving us with Unix and Windows and MacOS. Oh well. The fast pushes out the good.

    97:

    David McBride, fortunately there is https://dlv.isc.org/

    98:

    Nestor@69: Yes, but only because you've BEEN THERE and Firefox is raiding your History and Bookmarks to find it. If I type that into a fresh install, I don't necessarily get here. If I type that into Opera or IE or Lynx, I don't get here.

    "Charlie's Diary" is not a universal identity. And there are a number of people who can legitimately claim that name, worldwide - and a good number of people who might have a better claim to the name than Our Beloved Host, if, say, a million people wanted to know what some other guy named Charlie had to say.

    And if all those people need to know that "Charlie's Diary" isn't THEIR Charlie - well, isn't that EXACTLY what we've just been saying about directories versus URLs?

    99:

    Johnny99@66: The problem with your position is, fundamentally, this:

    I have to say I rather thought that spam filtering was on top of the problem from an end-user perspective.

    From an end-user perspective, yes, people are on top of the spam problem.

    But that is not because spam is not costly, or that your costs aren't greatly inflated because of spam. You've simply got a false impression of what internet access really costs, or would really cost if not for the multi-billion-dollar losses due to spam.

    Once again, I reach for an analogy:

    If giant worms ate any car that left a driveway, this would be a catastrophe.

    Currently, we pay immense amounts of money for armies to shoot at the giant worms when they reach the roads and to distract the giant worms so they can't find the roads. And, as a consequence, you only need to OCCASIONALLY route around a giant worm, which most often only occupies half a road where there are already pylons and detour signs.

    And your "tax dollars" pay for the Giant Worm Patrol without you ever being hit with a personal Giant Worm Bill.

    The fact that you, end user, are still able to use email at all? Is because of TRILLIONS of dollars of work to thwart spammers, and BILLIONS PER YEAR ongoing. If spam stopped suddenly and permanently[1], costs for internet access and internet-only services would drop by 95%, or more.

    Which is not to say that PRICES would drop by that much - I'm not that naive - but if the spam problem had never existed, prices wouldn't be this high in the first place.

    [1]: For example, after the implementation of RFC 4492, aka "shot in the head five times for spamming, over IP"

    100:
    From an end-user perspective, yes, people are on top of the spam problem.

    hahahahahahahahahahahahahahahahahahahaha

    Wait, are you serious?

    End-users still consider spam a problem. They are only ignorant of the true size of the problem.

    101:

    Have you never read the C or UNIX I/O APIs? The UNIX read() call specifies a maximum length. Stdio will do the same.

    If you're thinking of gets() (ugh! deprecated!) you're way down the wrong track. - Cameron

    102:

    gets is required for standards conformance; it is strongly recommended you not use it.

    It is far from the only function in common usage that doesn't take a length parameter -- strcpy and strlen to name two. The fact that there are safe versions doesn't change the fact that the unsafe versions existed, and were used, for decades. And, again, they are due to a failing in C, in that there is no bounds checking (in most implementations, I should say), and so even the "safe" variants require the programmer to be right.

    This is, as Charlie asks, where we went wrong. The fact that there were valid reasons for each of these decisions at the time doesn't mean they don't qualify under Charlie's topic.
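
    For anyone who hasn't met these functions, a small sketch of the contrast (mine, not the poster's; the buffer size is arbitrary): the unbounded calls write however much the input contains, while the bounded variants take a size and still depend on the programmer supplying the right one.

        #include <stdio.h>
        #include <string.h>

        static void copy_input(const char *untrusted_input)
        {
            char buf[16];

            /* Unsafe: no length anywhere. If untrusted_input is longer than 15
               characters plus the NUL, this scribbles past the end of buf.
               gets() is worse still: it cannot even be told how big buf is.
                   strcpy(buf, untrusted_input);
            */

            /* Bounded: writes at most sizeof buf bytes and always NUL-terminates,
               but it is still on the programmer to pass the correct size and to
               cope with silent truncation of long input. */
            snprintf(buf, sizeof buf, "%s", untrusted_input);

            puts(buf);
        }

        int main(void)
        {
            copy_input("a deliberately long line of user-supplied text");
            return 0;
        }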

    103: Regarding the concept of 'fast' versus 'secure', I recall the 'holy wars' of microkernel versus monolithic in the BSD vs Linux world. In this war the monolithic adherents ended up with the widest distribution (i.e. Linux), perhaps because of all the performance benchmarks showing the hit you take transferring data in and out of secure kernel memory. But how much of this stemmed from performance reasons that no longer hold true? Has our hardware improved to the point where we will not miss the 10-40% performance hit taken to implement better security models? If my OS product is harder to hack yet runs SPEC2010 half as fast as my competitor's, will anyone buy my product?

    Marketing once had us chasing MHz and now wants us to chase core count, yet how many cores can a non-HPC application utilize?
    Perhaps that is the reason Intel spent the billions buying McAfee... dedicate some cores, or 20% of the hardware, to security. Market the next Intel product campaign as 'dedicated hardware monitoring for security issues!' -- only in the new Intel Core i9!

    It's troubling when I run across companies running Linux servers that have Linux clients generating HPC data (think seismic, financial, engineering models, etc.) yet think they need to install a virus scanner for the Linux server. They complain about the performance hit, yet insist they need the scanner on the server... At what point do you accept the performance hit -- at the far end-user? Up front, whenever data is written, no matter what the source? Many have apparently decided all data storage gets scanned... no exceptions. In this model who wins in the end... you with your slow data access, or your competitor who might have some extra viral traffic but operates 40% faster (and perhaps many times more profitably)?

    On my dual-boot laptop (Vista and Linux) at work I choose the Linux boot if I have real work to accomplish. The Vista boot takes 2x longer, each website takes 2x longer (gotta be McAfee scanned!), 40% more CPU is burned scanning for in-memory risks, etc. My 2009 dual-core laptop becomes slower than my 2000 model, unless I boot Linux. It's gotten to the point that I run Vista in a VM to communicate with the rest of the company while accomplishing real work with no slowdowns in Linux.

    Surely there is a point where the layers of security, the virus scanning, the stupid slow Sharepoint or Exchange server become a competitive DIS-advantage compared to your more nimble WEB X.0 or Mac/Linux competitors? Evidently it's the 'cost of doing business' to live with this crap... but somewhere in the changeover to a Mickeysoft-centric world we lost the rich complexity of competing platforms. We went MONOLITHIC and the result was rich feeding grounds for parasites to flourish.

    I think you could summarize the issues as:

    (1) Monolithic environments.
    (2) The emphasis on profit (and speed to market) over proper design and testing.
    (3) Lack of innovation in hardware and software design:
        a - basic approaches not changing for decades;
        b - reliance on designs from monolithic entities (Intel/Microsoft/IBM).
    (4) A disconnect between the appreciation of human nature and its interaction with the ENTIRE solution space exposed by particular design approaches.

    I think we had a richer technology world when something like the Amiga could re-envision a graphics chip, microkernels fought monolithic ones, chipsets were NOT all x86-compatible, busses were NOT all PCI-derived (waiting on Intel's next?), etc. Perhaps the smartphone battle will shift tech innovation into high gear, away from the WinTel 'axis of evil'. It's a hopeful start...

    104:
    In this war the monolithic adherents ended up with the widest distribution (i.e. Linux)

    Ahem. I think Mac OS X (and its smaller brother, iOS) may have more users than Linux. (Maybe not, though -- lots of embedded Linux devices out there as well!) A fairly moot point, though, as Darwin is still a monolithic kernel, even though it's based on Mach, which was supposed to be a microkernel.

    As for performance... that's a fairly good question; one thing I would want to know before trying to really answer it is what the power costs are for changing levels.

    (A couple of possibly interesting points, relating to the Intel x86 architecture: on the '286 and later, changing contexts from one ring to a different ring was very expensive, hundreds of cycles; changing context to the same ring was much faster. Hardly anyone ever used this, except for various *nix on the '286 and '386, where the FP emulator was in ring 3, just like the user process. The other interesting point is that x86_64 has a syscall instruction, and it can only go from ring 3 to ring 0. The syscall instruction is highly optimized for this, and much faster than the lcall or int methods used before.)

    This is all barely on-topic, since -- as I said earlier -- each of these decisions had valid reasons at the time, but we end up paying for them long after they would have been decided differently.

    105:
    Ahem. I think Mac OS X (and its smaller brother, iOS) may have more users than Linux. (Maybe not, though -- lots of embedded Linux devices out there as well!)

    I thought about this before posting; however, at least in my line of work there are many Linux servers out there... many more than desktops, though perhaps not as many as embedded devices. The fact is that design decisions made to optimize performance at one time still haunt us today, the Intel protected-mode ring 0-3 solution being one that lost out. Why are we still optimizing basic instruction performance and adding more transistor count without investigating creative ways to put some of those transistors to use? Integration of GPU and CPU seems interesting so far. But what I would be more interested in would be implementing a standard microkernel VM that everything else runs under... Think something like the EFI BIOS but running as a VM; everything else runs under it, with a new protection ring designed just for it (no hacky shifting of ring values to fool OSes). This VM OS is always around to boot from, to crash to with network support, to diagnose hardware issues, etc. Maybe a Linux BIOS, but as a VM?

    The point is that this type of creativity has lost out to the monolithic model... everything must be compatible. Innovation is still possible at certain levels without disrupting the current ecosystem, but without competing forces our mono-overlords will only make minor 'improvements'. I do NOT see those innovations coming from ossified tech institutions. Even many open-source software products have devolved into the same WINTEL 'look and feel' (or, worse, only develop now for WINTEL). I would like to see Mac OS X get more than 10% market share, or Linux more than 1% of the desktop, if just to increase the rate of innovation and change among the established. Who on the hardware side still makes a suitable challenge to Intel besides ARM (whom Intel is gearing up to crush)? And who is there to challenge Microsoft? It all boils down to a lack of sufficiently powerful challengers to motivate the incumbents, meanwhile the rate of tech innovation crawls...

    106:

    And if you think all the above is bad ... try looking at this development. The possibilities for really serious security screw-ups are enormous.

    107:

    Granted, there is quite a lot wrong with network security. Most of what you mention has been a valid concern for years, and most likely will remain so for years to come.

    However, I think a fair point to make here is fixing these would not be simple, or they wouldn't have survived this long. Second, I cannot fault Microsoft, the NSA, or any others for failing to see the far-reaching implications of their actions. I have trouble remembering where I put my shoes, and I doubt anyone knew at the time they were the architects of a new age.

    I see people here talking about why we're using passwords when something else would be more secure. I would like to stress that security is not the only factor, but usability, cost, current support, and social awareness play large and critical roles in adoption of new policies. As fast as things move in this field, we will always suffer the hangups of the human element.

    Last, and perhaps most critical: the trade-off between security and accessibility. Until the usability of security policies and applications improves to the point that my parents can use them knowledgeably, there won't be significant progress. People genuinely want to be secure, as shown by an entire industry that does more damage than actual hackers (read: antivirus software). Until we create tools that are friendly enough to promote to casual users, we'll continue to hide in the relative safety of our sandboxed browsers and well backed-up systems that can be restored at will, while regular users are chucked en masse to the wolves.

    108:

    Yes, C was insecure but it is little more than an assembler. Are we expected to make our assembler secure too?

    Better languages have always existed. In my lighter moments I consider giving up C, C++, Java, Ruby and Python for a world of LISP.

    There has been almost no OS innovation since Windows 95 and its siblings arrived.

    Sure we have Linux but that is a me-too UNIX clone that seemed to succeed.

    Where are the neat little microkernels, now we have cores coming out of our ears? Why isn't every user-level OS running on top of a VM? Why isn't my PC at work running both Windows and Linux on a single box, instead of me having two boxes at my feet?

    Binary compatibility is a wonderful thing but it comes at the cost of platform diversity.

    Viruses would never have caught on if the world had remained diverse. We have chosen a small number of operating environments and the result is a complete lack of resistance to viruses. Sure, MacOS and Linux are pretty much virus-free, but that is only because they are such a small part of the demographic of OS use world-wide.

    Imagine if we had 20 different OSes with well-specified protocols for data interchange. That might be a different world.

    I once lusted after my own personal MicroVAX and now I type away on something the size of an A4 block of paper. It might not be secure but it is portable.

    109:

    Heteromeles: that's a very insightful comment. Do you have a blog? I'd like to read it.

    110:

    John @99 - as you've probably picked up I can only comment on this from an end-user perspective, which was that (after spam-filtering, etc) I get more physical spam (at a non-zero marginal cost to the person spamming me) than electronic.

    When I come back from two weeks away it's the pile of paper behind my front door that is more annoying than the state of my inbox. And this is mildly annoying, but I rarely see articles complaining that anyone, without their ID being authenticated, can put things in post boxes or just walk up to your front door.

    All from an end-user perspective. From a service provider/technical point of view, clearly different. On this: does spam really account for 95% of the cost of, say, running an ISP or providing email services to a large organisation?

    I stress I'm not denying it, just surprised it's that high.

    111:

    I totally disagree with you Charlie!

    Why not have a list "Where human evolution went wrong. #10: No tails".

    You've listed artefacts of whatever technoevolutionary algorithm humans follow.

    There are a million gazillion other things that are "wrong" in IT, like mathematical proof of program correctness, PERL, estimation techniques, PROLOG... I mean why not blame the QWERTY keyboard layout?

    Or the mouse?

    Or software patents?

    And what about the gazillion ideas and architectures and paradigms and frameworks and companies and what-not that have fallen by the wayside over the years?

    The architects of a successful idea didn't choose to make it successful; human society did. That is, technology is not responsible for technology's rise; human nature is.

    112:

    The Internet's End-to-End principle. It's unique in that it pushes almost all network intelligence out to the nodes on the edge. Compare it to a) the telephone/GSM network

    ....which really, really isn't great in security terms. Look at the presentations from this year's CCC. It was just annoying to connect to.

    113:

    Conflation of a user with code running on that user's behalf.

    Common access control systems (e.g. RBAC, role-based access control) associate a user with a process. That process can then do anything the user can. But a computer can do things much faster than a user, thus allowing systematic exploitation of that authority.
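
    A trivial sketch of this ambient-authority problem (my illustration; the file path is made up): the program below only needs to print a puzzle, yet nothing in the standard access-control model stops it from reading anything its user can read.

        #include <stdio.h>

        int main(void)
        {
            /* Runs with the full authority of whoever launched it, so the OS
               has no way to express "this program shouldn't touch that file". */
            FILE *f = fopen("/home/alice/.ssh/id_rsa", "r");   /* hypothetical path */
            if (f) {
                puts("nothing stopped this puzzle program from opening a private key");
                fclose(f);
            }

            puts("pretend a sudoku grid is printed here");
            return 0;
        }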

    See www.hpl.hp.com/techreports/2009/HPL-2009-30.pdf for a rundown of various access control systems, and of the idea of ZBAC, which seeks to avoid this conflation.

    We need to get to a point where we can securely decompose software systems so that each piece of code has an amount of authority commensurate with its responsibility. See Mark Stiegler's talk on secure decomposition methods : http://www.youtube.com/watch?v=eL5o4PFuxTY

    Also see http://owasp.blip.tv/file/3917705/ for a rundown on how we can virtualize to bolt ZBAC onto existing systems.

    114:

    I really am with the people who point out that the fast pushes out the good. Security is often too complex for a human to properly manage, so we do the minimum we can and the rest is a game of find 'em and fix 'em.

    I do think you need to take a step back here and think about it economically. Car locks are a pretty useless security measure (cars are relatively easy to break into), car alarms even more so (nobody responds to them). Yet we install them, because they raise the effort required to break into our very expensive vehicles. What we have in addition to that is a police force that takes car theft seriously, which ensures that being a car thief is not too attractive a profession. We need the same for identity theft. If anything is wrong on this level, it's that we're spending so much time and effort on copyright violations, while identity theft is clearly a much bigger issue for our personal security. Politics fail.

    There's also always a trade off in any security. I could install cameras and infrared scanners in my house, but considering the combined net worth of my belongings and the likelihood of a robbery in my neighborhood, it's just not worth it.

    To those complaining about CS programs: you have a point in a way, but keep in mind that most software you're using now is written by trade school types, not CS grads. If you needed an MS to write software, half the software industry would collapse. There are places that teach security, where you can get the basic information and the degree necessary to test for and design secure software. If companies had incentives to pay for these people's expertise, there would be a lot more of them. There's no need to go mess with the CS programs, which are aimed at a very wide variety of computing problems, not software engineering problems.

    I think my new motto is "economy trumps morality". If you design the law to make the right thing (i.e. designing with security in mind) more profitable, then eventually the market will respond in the right way. If you try to convince people that somehow software needs to be more secure and they should totally take money out of their budget to work on that, then you'll just end up plugging holes in the Titanic.

    115:

    Just to play devil's advocate - would we have the openness-oriented computer culture of today (FLOSS, Linux, etc. etc.) if the root of today's computing hadn't been that odd mixture of scientists, engineers and military services, which resulted in a "try it, if it works, use it" ideology, but instead something along the lines of "secure, official, marketable"?

    116:

    Since people seem to be discussing microkernels, it may be worth pointing out that there are a lot of attacks that having a microkernel, per se, won't help you defend against.

    The main point of the microkernel approach, as I understand it, is to protect operating system kernel components from each other --- so that, say, a filesystem would not be at risk from flaws in a buggy graphics driver. The main effect, as far as application code is concerned, is that apps are also protected from knock-on effects; the buggy graphics driver only affects code that's trying to do graphics, and can't affect background processes which are just talking to the filesystem and the network.

    However, this does nothing to constrain the actions of buggy applications. Say you've got a POSIX-supporting OS, and a buggy PDF viewer, which allows a rigged PDF file to cause it to read your contacts list and mail it to badguy@mallet.com. The PDF viewer winds up doing this by invoking the filesystem and network services. It really makes no difference to you (or to the attacker) whether it's doing this by conventional system calls on a monolithic kernel (as under Linux), or by invoking separate filesystem and network-protocol daemons through microkernel-mediated RPC (as under, say, QNX). It's going to work either way. The real problem here is the buggy app --- and (as I've already said) the buggy security model that says that any app that you're running is acting as your proxy, and so naturally ought to have access to all your data.

    (On the other hand, at a lower level, I'm surprised not to see a little more love for hardware bounds-checked array accesses --- as seen early in the Burroughs B5000, and later in niche machines like Lisp Machines and IBM System/3x boxes, but never really mainstream. They aren't a cure-all, but it seems to me that they're more of a help than, say, separate instruction and data spaces, which really doesn't help much if the attacker has any idea what the contents of the instruction space look like.

    But to echo some other comments, bounds-checking is hardware that you have to pay for, which does nothing at all if the program is correct, so in the '70s and '80s, when gates were expensive, it was hard to justify the cost. And of course, they do make it kind of awkward to try to write a C compiler...)

    117:

    If the question is valid, then the answer is, "The invention of agriculture." If we were limited to small familial nomad tribes, then bad behavior would be self-limiting: don't gather, and you starve. Position-seeking to avoid work comes from having enough surplus food to allow non-workers to survive.

    When someone today says, "Where did we go wrong?" they always look at the negative values of progress. Just remember that billions of people are alive because of the technology we have developed over these millennia. If you want to 'go back' or stop using the fruits of progress, which of your neighbors are you willing to tell to get off the boat?

    The place you seek, "where did we go wrong?", is not technology, it's people. It would be easier to protect ourselves from the wolves who steal rather than work if we had followed Shakespeare's dictum: first, we kill all the lawyers. That, too, is way too simplistic.

    In a world of increasing unemployment for the marginally skilled, where cheats are glorified in films and business, there are a lot of places where we went wrong. BUT all of them are people problems, not technology.

    118:

    Hmmm. The two biggest causes of mortality in hunter-gatherer tribes are infections (primarily in children) and homicide (everyone else).

    There's something to be said for the monopoly of force by an authority. Under normal conditions, it keeps the death rate down. Of course, under abnormal conditions (war, repressive regimes), it pushes the death-rate up, but there's a reason why people support evil empires.

    One could almost extend that to the internet: there's an enormous loss of money on the internet, but it doesn't kill us (at least until someone's stupid enough to attach the power grid to it: cue the "they already did that" comments)

    One general issue with the internet is that we don't have the equivalent monopoly of force by any group claiming authority. I don't think this is necessarily a bad thing, but it's not necessarily a good thing either. It just means that the form of information crime is different in each case.

    We're seeing something along these lines in the US, where the defense industry is pushing hard to have internet criminal activity classed as "infoWAR," not "infoCRIME."

    One relevant question is that if we militarize internet crime, is that a good thing? We might get a command and control structure, but what then? Getting pwned by the NSA is not high on my list of desirable lifestyle changes right now, and I'd rather use my laptop to write blogs, not launch botnet attacks at the enemy du jour in the name of national defense.

    119:

    I suspect that a part of that problem of using SSN to link obviously different records comes from the poor quality of the records. As a result of various events, I have three addresses which have been valid this century. I get attempts to contact a previous resident at this address for whom I don't have a current address. I've had newspaper reports which completely botched the names of people present (my grandmother's funeral, where they had a list).

    OK, I'm not in the UK, but I don't think that's different in the USA. The SSN and the other identifying data can contain errors, false matches can happen, and yet we assume that some arbitrary part of the data is perfect.

    Same name, same address, different SSN: is that a data error, or a father and son?

    120:

    Something still-current I've been concerned about:

    There have been some serious allegations about an open-source rival to a commercial program, used as a viewer for an MMORPG. Essentially, that it can steal data, either from your computer or from the game.

    One of the most recent allegations could be easily checked by outsiders: a webpage used by the program on startup contained some crude DDOS features, easily visible in cached versions held by such as Google.

    It's just come out that the developer who did this was the guy who had started the project. And he'd also done other stuff, involving a third-party DLL whose code he was licensed to use. He could put something nasty in the DLL, because he had access to the code. And what he'd done wasn't particularly secret within the group of developers.

    So, what other claims are there which we can't check for ourselves, but which happen to be true? Can I trust the particular group of programmers who let this happen? And does Open Source matter when, so clearly, few of us can effectively check the code? It looks like the Open Source nature of the program was irrelevant. Closed or Open, there were people who knew, and said nothing.

    121:

    Practically speaking, by volume of use and sheer stupidity, the Microsoft failures are so far the worst that the rest barely count.

    o The MS browser language might as well've been designed by crackers - it has NO useful security at all. Authentication does little in practice to help.

    o Yeah, MS has a security architecture now. But it's strictly security theater - purely about making the users' lives hard, while doing little to make crackers' lives hard.

    o Judging by their actions, MS doesn't mind that at all.

    The results speak for themselves. 99% of owned bot machines run Windows. The infection rate's high enough that a new Windows computer'll probably be owned faster than it can be upgraded and locked down.

    It's pretty hard to build an even reasonably secure computing environment for military and financial institutions under these circumstances, and few succeed.

    122:

    When most office computing was done on mainframes, viruses and other computer crime mostly weren't possible. ... mainframes and their operating systems were designed to protect against (for example) one process clobbering the memory space that another was using.

    There's nothing particularly magical about mainframes. The Morris Worm, which brought the internet of 1988 to its knees, spread via Sun workstations... and VAX machines. Granted, these were VAX machines running BSD Unix, but VMS itself was not "bulletproof" -- there were at least two VMS-only worms that spread via the DECnet protocol in 1988 and 1989 (the Father Christmas worm and the WANK worm).

    123:

    Charlie, how do you know that computer crime isn't a smaller proportion of computer-based economic activity than non-computer crime in the non-computer sector?

    Instead of a "parasitic drag", this might be a vast improvement.

    Reading the first article, the 2005 FBI survey found an average cost per company of $24,000. This is in fact shoddy reporting. From the original public FBI report, which is available on the Internet with a minimum of effort, of the companies that reported a financial loss to computer crime, the average cost was $24 K. 25% of the companies which answered the question did not report any loss. (And the second largest category of loss was simple theft of equipment.)

    Keep in mind, all the companies surveyed had revenues in excess of $1M, and doing some back-calculation from the pie chart on page 4 of the FBI report, the average reported gross income per for-profit company in the survey was at least $36 M.

    $24 K / $36 M = 0.07%

    That's a rounding error, man. Not a parasitic drag.

    Incidentally, this is the question they asked: "What approximate dollar cost would you assign to the following types of incidents within the last 12 months? (business lost, consultant time, employee hours spent, ...)" Note that it includes the cost of paying computer security consultants, which I find amusing.

    Similarly, US GNP in 2004 was 12 trillion dollars. $53 billion losses due to identity theft is 0.4% of that. That's less than we lose in traffic jams, an accepted fact of life.

    So, while this discussion is interesting, it might be based on a faulty premise.

    124:

    Then the desktop-computer salespeople came in, and showed everybody cute-looking "toy" operating systems like the Macintosh (this was years before IBM made a PC).

    The first IBM PC was introduced in 1981; the first non-IBM-manufactured compatibles became available in 1982. The Macintosh did not appear until 1984.

    (Are you confusing the Mac with the Apple II, perhaps? The Apple II did start to make inroads into offices in 1979, when VisiCalc was introduced.)

    125:

    FYI:

    Top secret trials for NICTA's kernel breakthrough
    By Brett Winterford, Aug 14, 2009, 3:16 PM
    Intelligence community gets first dibs on secure OS.

    The development of the world's first formally proved general purpose operating system by Australian researchers has garnered interest from within the intelligence community as a potential solution to long held limitations to secure data storage.

    The Secure Embedded L4 (seL4) microkernel, developed by engineers at NICTA (National ICT Australia), is the world's first general purpose microkernel that is not only formally specified (mathematically described), but also formally verified - meaning that mathematicians have proven that every line in the kernel is consistent with this specification.

    Applying formal methods of mathematics to "prove" the microkernel has taken NICTA boffins some five years - requiring a team of crack mathematicians and an open source verification tool called Isabelle, developed at the Technical University of Munich, Germany.

    Developers of systems based on the microkernel will now be able to mathematically prove their software is free from most errors. The kernel is impervious to, among other vulnerabilities, buffer overflow attacks.

    Professor Gernot Heiser, a professor in operating systems at the University of New South Wales and leader of the NICTA project, said the operating system is likely to be used to solve what he calls the "high paranoia" effect - any system in which people are extra worried about security and safety.

    "It will be used in devices with high stakes, life and death stakes, such as medical, automotive or aerospace," he said.

    Heiser said the microkernel has been used in a 'demonstrator' system that is intended for production use.

    Heiser revealed the secure operating system is already under review by an unnamed national security agency.

    "It is being used in a device that deals with the storage of information in which file are of different classifications and thus need to be kept clearly distinct," he said.

    Traditionally, 'military spec' computing with data of a sensitive nature has been kept on separate discrete computers [PDF - see section on Abstraction of System Description] from information classified at a lower or higher level. In extreme cases, classified materials are stored on purpose-built hardware devices that can only be connected to a network under strict conditions.

    The secure microkernel developed by NICTA, on the other hand, can be deployed in a virtual computing environment - whereby materials of different classification can be securely stored in software on the same piece of hardware, on logically and securely separated virtual machines.

    "The intelligence community will be able to move from physical separation to logical separation, guaranteed to be secure by our kernel," Heiser said. "It's a case of virtualisation, with guaranteed isolation."

    http://www.itnews.com.au/News/152987,top-secret-trials-for-nictas-kernel-breakthrough.aspx

    126:

    Uh... I think you might have frothed on me a bit there, Jon.

    You mean VBScript when you say "browser language"? Not sure how a language per se is a security hole. Or possibly you meant ActiveX which, as originally implemented, was horribly insecure.

    MS does care. A lot. Ask anyone at MS how much they care. They're not idiots. At worst they know security is a massive financial risk to them if they don't fix it ("yes your honour, we knowingly did not fix that security hole which cost $2B in lost revenue to various companies.")

    And they'd love to fix those gazillion infected XP. They have a well documented solution for that OS: selling you a copy of Windows 7.

    All up, your arguments aren't very rational; I'm guessing you're under 25? How I wish for a segregated internet.

    127:

    DDoS attacks and encryption are orthogonal to one another - with a botnet of compromised hosts under my control, I can either a) launch a DDoS attack standing inside your posited transport-layer encryption tunnel or b) send nonsense packets outside the tunnel, which the end-system must still handle.

    Actually, encryption makes it far more difficult to defend against DDoS attacks, as the targeted end-system or any intermediate defense system such as an intelligent DDoS mitigation system (IDMS) must first expend the computational power to decrypt all the traffic, including the attack traffic, in order to sort the legitimate traffic from the attack traffic. So, this makes the DDoS even more effective.

    It also means that IDMSes must in effect MITM the encrypted sessions in order to properly classify the incoming traffic, thereby breaking the end-to-end encryption model. Not good.

    Transport-layer encryption is actually iatrogenic in nature, as a) at the very least, traffic analysis is still possible because source/destination pairs must be en clair and b) in-depth classification of incoming traffic is either impossible or requires the MITM model listed above for everything. This is a huge computational burden which can be exploited by an attacker.

    And the transport-level encryption is useless, anyways, if one of the endpoints is compromised - as is the case with botnets, the #1 security threat on the Internet today. So, we end up with a whole bunch of additional crypto overhead which is a DDoS vector in and of itself, self-compromised cryptosystems due to the MITM issue noted above - and the bad guys are still reading the traffic, because they've compromised an endpoint. Not very helpful, is it?

    In fact, overencryption presents a serious security problem today due to the above - things are encrypted which really don't need to be encrypted, so we're faced with the traffic classification issue and are then pushed towards MITMing the end-to-end crypto ourselves, and of course those MITM points themselves can be attacked.

    IPv4 is a 25-year-old creaky, loosely-coupled set of communications protocols which were intended for a laboratory environment; the fact that they've been stretched waaay beyond their design parameters to build this thing we call the global Internet is a testament to the skills of their designers.

    Unfortunately, IPv6 has all the problems of IPv4, plus brings new ones of its own, like all the unfilterable ICMP required by Neighbor Discovery (ND), which constitutes its own series of DDoS vectors.

    And it brings them in hex. It is my contention that the consonance of the English letters 'B', 'C', 'D', and 'E' will end up costing untold billions of dollars over the coming years as IPv6 is rolled out (assuming it ends up being widely deployed, which, contrary to the common wisdom, isn't an assured outcome) due to misconfigurations resulting from said consonance.

    I largely agree with your other points. IMHO the best single thing which could be done to improve computer security would be a wholesale switch to the use of typesafe languages, but this will never take place in the real world.

    For more in a similar vein, see my AusNOG presentation here:

    http://www.ausnog.net/files/ausnog-03/presentations/ausnog03-dobbins-rokusaddos.pdf

    128:

    Somebody tried to hit us with the WANK worm, but we stopped it before it ever ran. I was security admin at the time.

    VMS had two things that no PC system, and no version of Unix I know about, ever did: first, a provably airtight kernel (probably because VMS was used for things like nuclear reactor design, where the government actually required provably bug-free code), and more importantly in this case, a whole array of privileges that can be handed out selectively (to a user and/or a process) rather than just the all-or-nothing "superuser" status that Unix and other common operating systems use. For example, you can give a process the power to access certain restricted objects (say, the console printer or the fast batch queue) but not the power to modify files belonging to others.

    All of which means that the only way VMS will run a worm is if one of its operations staff is so clueless as to accept an install tape from an unknown source and run it from a privileged account.

    You're right, in a sense, that there's nothing magical about mainframe architectures. Indeed, there are workstations that run VMS, and other relatively solid architectures (HP-UX comes close). What's important is that the operating system kernel be designed from the bottom up to do security first.

    Which most up-to-date mainframe OSes did by 1980. But if you tried to do the same with the PC hardware available then, the machine would have run so slowly that nobody would buy it. Thus the various builders of micros, from the Altair and TRS-80 up to the Mac and PC, made the intentional decision not to bother. In many cases these began as single-process machines anyway, so the problem didn't exist until multitasking was demanded and added (or kludged in, as in Windows).

    But trying to secure a computer by installing an add-on product such as an antivirus -- or even a whole suite of them -- is like building a house with paper walls (do they really do that in Japan?) and then installing a $2,000 hardened lock on the front door. With those walls, you need to burn it down and start over.

    129:
    Then the desktop-computer salespeople came in, and showed everybody cute-looking "toy" operating systems like the Macintosh (this was years before IBM made a PC). In theory these machines were more efficient (measured in raw Hz or FLOPS) than mainframes, but that's partly because they weren't counting a lot of the operating system "overhead" and partly because they decided a lot of that overhead -- in particular the kinds of protection found in VAX/VMS -- wasn't necessary, and took it out.

    Nope, sorry, the first IBM PC, the model 5150, was introduced in 1981. The Apple Lisa wasn't introduced until January of 1983 and the Macintosh wasn't introduced until 1984. Get your facts straight.

    130:
    The problem stems from the approach of making personal computers by taking what were in effect games consoles like the Atari 400 and beefing them up to become Apple ]['s and IBM PCs and so forth; instead of taking minicomputers and further scaling them (and their operating systems and already fairly well developed philosophy of not trusting users) down to the personal level.

    Actually the Apple II was introduced two years before the Atari 400 shipped. Atari's first 8 bit computers, the 400 series and 800 series didn't have much in common with the Atari 2600 game console other than the CPU.

    131:

    @128: The "installation from trusted source" risk was already predicted by Thomas Ryan in "The Adolescence of P-1", an amazingly prescient 1977(!) novel. (Good luck finding a copy - given its age, it's quite a good story.)

    132:

    Surely, however, when you read a statistic like "According to one estimate pushed by the FBI in 2006, computer crime costs US businesses $67 billion a year. And identity fraud in the US allegedly hit $52.6Bn in 2004..... Extrapolate it worldwide and the figures are horrendous — probably nearer to $300Bn a year." then the economist's take would be to ask how much of that was loss and how much of that was redistribution. If a Nigerian spammer takes $1 mill off someone, that money isn't necessarily wasted - it could be doing more for the Nigerian economy than it was doing in Kansas. And the same goes for the more technological scams. A certain percentage of spam, for example, counts as loss, because it degrades the system, but it's not easy for anyone to say how much, let alone the FBI.

    133:

    Steveg: Yellow card. Further personal abuse here will get you banned and your comments deleted.

    134:

    Oh, man, I remember P-1: distributed, networked computing. Just-in-time compilation (that's what the magic-at-the-time aspect did, when it could "analyze" functions and make them smaller). Check floating.

    He was wrong about the compute power needed, and just how small a gigabyte of storage (RAM or hard drive) would become. But a pretty astounding effort for someone who doesn't seem to have published anything else.

    135:

    I blame Arthur C. Clarke.

    I THINK the first computer-virus was described in "The Pacifist", 1956, later collected as one of the "Tales from the White Hart" ......

    136:

    Typesafe languages, of course, have their own problems. Programs written in Java (sorta typesafe) are often still riddled with holes. I suppose there is a point where you can use the type system to make the system more secure (e.g. distinguish between safe and unsafe strings), but then all the dynamic languages have features that can be used to do the same (I seem to recall Ruby on Rails allows you to mark objects as tainted, or freeze/unfreeze them).

    More importantly, I haven't seen any evidence that systems written in python or ruby are somehow less secure on the web than systems written in C# or Java. The reason being, good security relies on the developer putting effort and thought into it and having the required knowledge to know when he's messing up. Even then, only a good security consultant will really be able to tell you how to do it, and that costs money.

    There's a recurring theme to my comments here, economy over doing the right thing. :-)

    137:

    Noted with apologies.

    138:

    JANET (the UK academic network) used 'backwards' addressing in the 1980s, though it eventually converted to internet-style sometime around the end of that decade.

    uk.ac.ncl was the domain for Newcastle University.

    139:

    Just to follow up, Charlie, this Reuters article estimates that organized crime's cut of the Italian economy is 7%. That's very old school crime, too, with half the cut in smuggling and human trafficking.

    According to the FBI's data -- not their analysis, which I agree is self-serving -- computer crime in the US is two orders of magnitude smaller than non-computer organized crime in Italy.

    And yet Italy survives, if rather unhappily, as a First World nation.

    To my eye, all the proposed solutions here appear to require replacing the large majority of existing computer systems with new (and necessarily untested, but let that be) architectures, operating systems, methods of data transmission, etc. Am I wrong?

    But this seems badly incommensurate with the scale of the problem.

    I know it's based on your dissatisfaction with existing kit, Charlie, but if your post had been made by an IT industry spokesperson, I might wonder if it was a ploy for more rapid turnover in product.

    140:

    Tim Freeman @ #8: The same problem is also present in multiple places outside of computers: Mechanical keys for locks...

    Or inside/outside: Dennis Ritchie, late 1980s, addressed a Comdex audience on security. Don't forget physical security, he said: 'I can't count how many times I've seen system and server rooms with their [leading brand] 4-digit pushbutton locks still set to [factory default nnnn].'

    As he reeled off the digits, half the audience suddenly got a far-away expression. Were it today, I'd expect a flurry of texting.

    141:

    I guess the computer revolution went a bit too fast compared to the previous ones (industrial/automotive/aeroplanes). 20 years after the first personal computers, more or less everybody in the Western World had one in their homes.

    According to Wikipedia, there are now some 800 million cars and light trucks on the road, but we're well on the way to hitting 1.5 billion personal computers.

    Also, computing artifacts (like C) tend to live a lot longer than bad design decisions in cars - if last year's car didn't have safety belts/airbags/a catalytic converter, that's not a very big impediment to fixing it this year. Not so with using crappy protocols, short of legislation which won't happen anyway.

    I think the computing industry has built up too much momentum, too quickly. I wonder what the brick wall will be?

    (As an aside: I'm currently reading, with great interest, Alvin Toffler's Future Shock - written 5 years before my birth! Has anyone here read that?)

    142:

    Charlie, one small nit with your statement "if you are serious about comsec, you do not allow listeners to promiscuously log all your traffic and work at cracking it at their leisure" is that, from a crypto point of view, your system has to be secure against exactly this kind of attack. If your methodology depends upon the data being hidden, then it's not a good methodology to start with.

    143:

    Something I've wondered is if landline calls had become lots cheaper, sooner, maybe Internet-enabled business and banking would never have really taken off, and the substitute might have been, in some ways, more secure. Hear me out...

    If telephone calls (specifically data calls) had been cheaper, sooner, more people would probably have had modems. Given that, various businesses and banks might well have implemented their online businesses using banks of dialup modems. Arguably, this would have limited the number of points at which interception and interference could take place. Furthermore, if this had happened back in, say, the 1980s, they might well have distributed crypto material on floppies from their branches (before they closed so many) and so you'd probably have no need for Certificate Authorities to give you some clue as to whether the crypto material was genuine or not.

    On the other hand, the US has long had free local calls, and France had Minitel, so maybe it wouldn't have worked out like that after all...

    144:

    I think one of the major errors was the widespread adoption of imperative rather than functional programming languages.

    I realize more people can get their head around the imperative languages, and this makes it another "economics wins", but assignment statements really are dangerous.

    145:

    The C language had one factor that made it extremely attractive to programmers -- the Standard Library. At the time, CS types assumed that every system would have a different I/O system which would require a completely different API. As a result, programs could never be portable. C programs all use the Standard Library, and so can be ported easily.

    More generally, C is a system, not just a language. The compiler, linker, libraries, etc, all work pretty much the same way on different systems. Pascal systems (which I used extensively at one time) were wildly different.

    "Real Programmers program in C because C is the only language Real Programmers can spell."

    146:

    Agreed that functional languages have all sorts of useful safety properties that imperative languages lack. But they were generally viewed as special-purpose toys and theoretical curiosities during most of the formative years of the computer industry --- up until the 1990s at least. (You could try to quibble about Lisp, but the versions that saw industrial use had imperative features all over the place, and in practice, very very few programs were restricted to the functional subset.)

    So, increased use of functional languages is something that could be a big help going forward. But for that to be "where we went wrong", there would have had to have been a point where both functional and imperative languages were seen as a possible choice, and imperative won. I'm not sure that ever happened; when the choice was "Fortran or Algol or Cobol or X?", X wasn't Erlang, or some purely functional Lisp dialect; it was assembler.

    147:

    Actually, URLs seem like a non-issue. Many, if not most, non-expert users now simply use Google as their address bar. We are showing our age and geek culture every time we type a URL...

    148:

    I don't think anybody ever thought about the choice of language in "imperative or functional?" terms, no. (I don't think most people involved in computing would necessarily have recognized the language of the question before about 1995, and 1995 might be way optimistic; I have hit some difficulty with the idea of there being a distinction while explaining XSLT to some relatively young C# programmers this month.)

    Charlie's category of mistake I took to be broad enough to include those cases where no one thought about the consequences.

    More generally, there's a rule with code that says you have to know when to stop, write down/abstract what you have learned, and throw out the current codebase and start again with the same objectives but better knowledge and tools.

    That's a hard thing to do; it's a harder thing to do with infrastructure. But I suspect that's a problem that seriously needs solving, how you declare some infrastructure a mistake and start over. Infrastructure is hard, and if you have to get it exactly right the first time you're in a much worse place than if you can try again.

    149:

    I'm with Graydon. I'm genuinely confused about how a language could lead to a security hole.

    I mean, it's just a compiled set of instructions to a machine. The holes are not necessarily in the word syntax of the language, they're in what the computer can be made to do.

    We've already noted that both Von Neumann and Harvard-style architectures are vulnerable to viruses, and we could even go so far (Gödel's incompleteness theorem beckons) as to say that all reasonably useful systems can be blown up by someone with malice and cleverness.

    So perhaps it doesn't matter what you say or how you say it: viruses happen, just as criminality does.

    Off topic: Enough people have asked about my own blog that I figured I'd put up a dunking booth to entertain them. Check it out if you're interested.

    150:

    Yes, I've read "Future Shock". (I consider the current antics of the American right as a fairly clear endorsement of Toffler's concept ...)

    151:

    It's not a matter of syntax, it's the semantics of the language and its standard library. Just as with human languages, computer languages make it easier -- or harder -- to express certain concepts. In the case of computer languages, this can have an impact on security; if it's hard to write secure code, or easy to take insecure shortcuts, most programmers won't take the effort to avoid insecurity.
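
    To make that concrete in C terms: the standard library's path of least resistance is often the unsafe one. A minimal sketch (the function and buffer names are mine, purely illustrative):

        #include <stdio.h>

        /* The shortcut: sprintf doesn't know how big buf is, so a long
           user name walks right past the end of the array. */
        void log_unsafe(const char *user, const char *msg) {
            char buf[64];
            sprintf(buf, "%s: %s", user, msg);
            puts(buf);
        }

        /* The safe spelling has existed all along; it just takes one more
           argument, so it loses the race for the path of least resistance. */
        void log_safer(const char *user, const char *msg) {
            char buf[64];
            snprintf(buf, sizeof buf, "%s: %s", user, msg);
            puts(buf);
        }

    The point isn't that the safe call is hard; it's that the unsafe call is shorter, compiles without complaint, and works fine right up until someone supplies a 300-character user name.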

    152:

    "As an aside: I'm currently reading, with great interest, Alvin Toffler's Future Shock - written 5 years before my birth! Has anyone here read that? "

    I still have my old Pan paperback... 10th printing, 1974, bought in 1975.

    You might like to try John Brunner's "The Shockwave Rider" (1975). Brunner says in the Acknowledgment: "The 'scenario' (to employ a fashionable cliché) of The Shockwave Rider derives in large part from Alvin Toffler's stimulating study Future Shock, and in consequence I'm much obliged to him. J.K.H.B."

    http://www.amazon.co.uk/Shockwave-Rider-John-Brunner/dp/0345467175

    My copy of that is Ex libris John Brunner in hardback and I've had it since you were ... Good Grief, how could you, how can you, be that Young?

    153:

    1 - spam IS NOT a problem, at least if you use Gmail. Seriously, I get spam once every few months. Admittedly this level of inspection and filtering does mean that Google probably knows more about me than my mother does, but oh well... In comparison, I get junk mail in my physical mailbox EVERY BLOODY DAY.

    2 - The big elephant in the living room is SQL injection, and it's entirely a social/education issue. Secure parametrized SQL has been around for years, but 99% of all SQL textbooks still give examples sorta like this: "INSERT INTO Users (name,surname) VALUES ('" + x.name + "','" + x.surname + "')" ...

    This is a deliberately simplistic example, but similar to what I remember my SQL textbook had. Students learn to code like this and some never un-learn.

    154:

    Re #2 - Tony Hoare did fess up to NULL being a really bad idea of his that's cost the IT industry a billion dollars. Personally I think that's a low-ball figure. See InfoQ for his admission of guilt.

    155:

    The problem with your analysis is that it assumes that the solutions to simple, existing problems would yield a world without those problems instead of what we actually get which is a world where those problems take on new forms.

    A few specific issues with your comments, though:

  • Yeah, everyone already nitpicked the C++ mistake to death. I was guilty of this assumption for years into the 1990s myself, as I'd stopped paying attention to C++ after CFront which was a pain to use. g++ is a nice and full-featured compiler that doesn't go through C, but more interestingly, C++ now solves many of the problems you're concerned with at a low level, especially strings, which C++ has several tools for managing more sanely.

  • Code vs. data is one of those nice theoretical arguments to have, but in the modern day, code and data are one. I have... may all the little gods help me... downloaded an HTML file which triggered a JavaScript file load which was, itself a virtual machine implementation which proceeded to execute Java Byte Code. I promptly re-formatted my hard drive and said a prayer to the goddess of /dev/null to protect me from such things, but the point is that processor native instructions are only a small fraction of the code that your computer executes on a daily basis.

  • Which brings us to: real protections. JavaScript is an excellent place to research the history of protecting a computer from the code it's executing. JavaScript stands on the shoulders of hundreds of advances in the field before it. Its bounds protections, access controls and context separation features are top-notch and should be the starting point for any such discussion.

    156:

    SQL injection, and its close cousin, the cross-site scripting attack (injection of hostile Javascript into rendered web pages), are actually examples of how a language with a rich type system, and a library which makes decent use of it, can make common screwups a whole lot less likely to happen by accident.

    The basic idea is to have multiple string-like types (SQL string, Javascript string, user-entered data), and to have the conversions between them perform whatever quoting is appropriate. The cross-site scripting protection in the upcoming Ruby on Rails 3.0 release (which has since been backported to 2.x) works like this, treating HTML boilerplate from templates and the like as "html_safe", and arranging for proper quoting to occur by default when that stuff is combined with "unsafe" stuff that came from user input, or whatever.

    The reason that cross-site scripting is so hard to root out is that if you treat HTML fragments and user input indiscriminately as strings, using the same "string" type for both, there is a safe operation --- concatenating HTML fragments --- and an incredibly unsafe operation --- concatenating HTML and raw user input --- and they look exactly the same, both to the programmer and at the implementation level.

    Now, it's easy to say that a careful programmer should know the difference, and avoid coding up the dangerous version --- but there are dozens of things that a programmer needs to keep track of in a program of any complexity, and it's inevitable that some of them will fall through the cracks. The Rails 3.0 approach isn't totally foolproof --- it's certainly possible for a programmer to mark some random string from the database as "html_safe" when it isn't. But at least in that case, there's a really weird-looking conversion in the source code which calls out for special attention; more importantly, it makes it a lot less likely that dangerous implicit conversions like this will happen by accident.

    (I should mention that this doesn't depend in any fundamental way on Ruby's type system in particular; the same basic approach is easy to work up in Haskell, for example, for people who like static type systems. It does help to have operator overloading, though...)
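
    A rough sketch of the same idea in C terms (the type names and escaping rules are mine, and real frameworks like Rails' html_safe do considerably more): once "raw user input" and "safe HTML" are different types, the dangerous mix-up can no longer happen silently.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        typedef struct { char *s; } html;       /* already-escaped HTML */
        typedef struct { const char *s; } raw;  /* untrusted user input */

        /* The only route from raw input to html: escape as we copy. */
        static html escape(raw in) {
            size_t n = strlen(in.s);
            char *out = malloc(6 * n + 1);      /* worst case: "&quot;" per char */
            char *p = out;
            if (!out) abort();                  /* keep the sketch simple */
            for (size_t i = 0; i < n; i++) {
                switch (in.s[i]) {
                case '<': p += sprintf(p, "&lt;");   break;
                case '>': p += sprintf(p, "&gt;");   break;
                case '&': p += sprintf(p, "&amp;");  break;
                case '"': p += sprintf(p, "&quot;"); break;
                default:  *p++ = in.s[i];
                }
            }
            *p = '\0';
            return (html){ out };
        }

        int main(void) {
            raw comment = { "<script>alert(1)</script>" };
            html fragment = escape(comment);   /* fine: goes through the escaper */
            /* html oops = { comment.s };         draws a compiler warning: the
                                                  unsafe shortcut is now visible */
            printf("<p>%s</p>\n", fragment.s);
            free(fragment.s);
            return 0;
        }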

    157:

    From my personal history: I started working for DEC in field software support in July 1977. After learning PDP-11 assembler and being trained on RSX-11M, in Jan 1978 I had 4 weeks of VAX/VMS training, including VAX assembler, and installed the 1st VAX in Kentucky in May of 1978. 8 users in 256 KB, woo-hoo! Spent the next ~15 years as a VAX wizard.

    So when I started hearing about buffer overflow exploits, I couldn't understand how they worked. In VAX/VMS, all pages were read or write, and code or data. You couldn't make a data page executable. I was floored when someone explained to me that the Intel architecture was one flat address space!!! How stupid is that? VAXen (the end of CISC evolution) also had built-in string instructions that processed strings as descriptors: 16-bit string type, 16-bit length, 32-bit pointer. No null-terminated strings for us, no sir!

    DEC could have owned the server marketplace from the birth of the PC, but: 1) they were arrogant as all hell, and couldn't make VAXes as fast as they could sell them; 2) they made the classic mistake of choosing margin over market share -- always bites you on the ass in the end; 3) "There is no reason for any individual to have a computer in his home" -- Ken Olsen, 1977. Wow, actually this was totally taken out of context?!?!? http://www.snopes.com/quotes/kenolsen.asp

    I loved VAXen so much, DEC broke my heart; I have become completely areligious re architectures / OSes / vendors since.
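
    For readers who've never met one, a descriptor of the kind being described looks roughly like this in C terms (the field and function names are mine, not DEC's actual layout, and on a VAX the pointer field was 32 bits):

        #include <stdint.h>
        #include <string.h>

        /* A length-carrying string descriptor: the length travels with the
           pointer, so "end of string" never depends on finding a NUL byte. */
        struct str_desc {
            uint16_t type;     /* string class / data type code */
            uint16_t length;   /* number of bytes in the string */
            char    *pointer;  /* address of the first byte     */
        };

        /* A copy routine can then refuse to write past the destination. */
        static void desc_copy(struct str_desc *dst, uint16_t capacity,
                              const struct str_desc *src) {
            uint16_t n = src->length < capacity ? src->length : capacity;
            memcpy(dst->pointer, src->pointer, n);
            dst->length = n;
        }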

    158:

    Unfortunately, removing execute permission from stack pages doesn't actually make stack-smashing attacks less potent, without other measures; it just makes the attacker have to be a little more clever. Linux and Solaris systems generally do make the stack non-executable (when running on processors that make that possible, including recent x86es), but they'd still be vulnerable without additional measures, which I'll discuss below --- and VMS from the '80s almost certainly was as well.

    The most cogent writeup of the problem is a paper by Hovav Shacham that's already been cited a couple of times on this thread, with the somewhat unfortunate title "The Geometry of Innocent Flesh on the Bone". A better title (at least for those of us who don't feel compelled to drag Bob Dylan into everything) might be "How to make any program do whatever you want, using only instructions that were already there." He presumes that he's able to smash the stack, but not to execute instructions that are located there; as on VMS, he also assumes that the processor is configured to execute instructions only from a separate set of pages that he can't write. And yet, just by smashing the stack, he's still able to force the processor to perform essentially arbitrary computations.

    The basic setup is this: stack-smashing lets you overwrite the return address. A traditional injection attack has you overwrite the return address with the address of a buffer that's also on the stack, so the processor just starts executing instructions there. Now, if the stack isn't executable, that won't work --- but you can still force execution to start at any valid instruction address in the program, or its libraries, and that turns out to be good enough.
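
    (For anyone who hasn't seen the setup before, this is the sort of function being smashed, sketched in C; the layout in the comment is typical of x86-style stacks, though the exact offsets vary by compiler and ABI.)

        #include <string.h>

        void handle_request(const char *request) {
            char buf[64];
            /* A typical stack frame, from high addresses down to low:
                 ... caller's frame ...
                 saved return address   <-- overlong input eventually lands here
                 saved frame pointer
                 buf[63]
                 ...
                 buf[0]                 <-- strcpy starts writing here, upward
            */
            strcpy(buf, request);   /* no length check: a long request keeps
                                       going and replaces the return address */
        }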

    Here's the clever bit. Say the stack looks like this:

        addr1   <--- return goes here
        addr2
        addr3
        ...

    And let's say that "addr1" points to a short sequence of code that ends in another "return" instruction. What happens when the processor executes the "return" instruction that sends it to addr1? It'll run the short sequence that ends in "return" --- at which point the processor will pop "addr2" off the stack, and go there. Thence to "addr3", and so on down the line. And so, in this way, you can cause the processor to execute many of these blocks of code, which Shacham calls "gadgets", in sequence.

    So, what can you do with this? With the right set of gadgets, just about anything. Branch to a gadget which loads a register, branch to a gadget which adds something, branch to a gadget which stores the value somewhere else; you've programmed addition. Want to do something else? You just need the gadgets for that --- and it turns out that the standard C libraries, on both Linux and Sparc (from Shacham's subsequent joint work) have gadgets that are sufficient for literally any computation you like --- Turing-complete, in the jargon.

    (This includes unconditional and conditional branches, by the way --- you just need to find "gadgets" that adjust the stack pointer before doing their "return". And system calls are easy: just "return" to the entrypoint of a library system call wrapper. It'll run normally, and execute the normal "return" at the end --- which bounces right back to the gadgetry.)

    So, you can't put instructions for the real machine on the stack, and it just doesn't matter --- because you can put instructions for the "gadget machine" on the stack, and it's just as capable, if perhaps somewhat slower.

    Now, there's still one requirement here --- you have to know the addresses of the "gadgets" for this technique to work. And recent versions of many operating systems have tried to map libraries in at unpredictable addresses to make that more difficult --- the technique's known as "address space layout randomization", or ASLR. But you see where this is going... the next step in the arms race is for the attackers to try to ferret out information on what the "randomizer" is actually doing, so they can guess where the gadgets are likely to be.

    The bottom line here isn't that revoking execute permission is entirely useless; in conjunction with ASLR, it still has useful effect. But the hardware feature by itself isn't enough --- you have to be using it right, which is a surprisingly subtle business. And trusting any particular feature to keep you safe without thinking too hard about possible ways to defeat it really is the road to perdition...

    159:

    Note way back up there (comment #20, it seems), I pointed out that having the return address go in the same place as user-accessible data was one of those places we may have gone wrong :).

    It wouldn't completely solve the problem (due to function pointers), but it would make it much harder.

    And, of course, having all user input fully validated and verified and limited would also do that -- you can't trash the stack in that case.

    160:

    Well, simply separating control and data stacks wouldn't necessarily be a cure-all either, if the control stack shows up at a predictable address --- it wouldn't be as easy to get at as in a traditional buffer overflow, but techniques used for heap-corruption attacks (clobbering pointers) might still allow you to write the separate control stack, at which point we're back to square one.

    If the control stack were in separate memory that couldn't be written except by call instructions, I'd expect that to at least make the attackers sweat for a while. But it's still an arms race.

    (Incidentally, one possibly bright sign is an increasing amount of code that's getting shipped around as instructions not for raw computers, but for various artificial VMs --- JVM, CLR or Dalvik bytecode --- which actually check bounds on array subscripts, and the like. Which provides another vector for trying to introduce extra safety features like this, without necessarily modifying the hardware. Though it does leave a potentially vulnerable layer of native code underneath; I'm sure I'm not the only person who's said that the most unstable thing about .NET is the quicksand they built it on...)

    161:

    @ 157 "2) they made the classic mistake of choosing margin over market share -- always bites you on the ass in the end." Live Apple, do you mean? They're STILL doing it! one reason I prefer MS, for all its amny faults - the arrogance of the Jobs legions is scary.

    There's the earlier classic, attributed variously to a mayor in early twentieth century Pennsylvania, and / or A. G. Bell: “I can foresee the day when there will be one of these (a telephone) in every town”

    162:

    The general feel of your comments seems to point to a major--perhaps the major--"bad" influence in computing that you nonetheless neglected to mention: the "New Jersey" philosophy of building software, as epitomized in Unix.

    The "cheap, quick and dirty" philosophy that is Unix pervades the Internet, which is no surprise since it was the BSD TCP stack and the applications built upon it that did more than anything else to give us the Internet we have (and the applications we use on it) today. (Note that this approach even won in battles within the Unix market: there's a good case to be made that, even with the AT&T lawsuit, Linux won out over earlier free systems, such as NetBSD, mainly because it had more of the "hack it now" rather than "wait and do it right" approach.)

    What's your take on the idea that had the New Jersey philosophy lost, and we were instead running in a world of Multics and Lisp machines, the world would be a much better place? And what's your take on the economics of that ever happening?

    163:

    153.1 - I'd agree. I'm careful about where my e-mail address goes, and get about 1 spam e-mail (include adverspam, Nigerians, mail-order brides etc in spam) a month. Similarly, I don't have a voice landline, am choosy about who gets my mobile number, and get about 1 cold call a year. OTOH I get several named and addressed pieces of junk snailmail most weeks.

    161 - [cough] Linux [/end] ;-)

    Oh and I've a notion about someone being told something about how "the telephone will make it possible for Baltimore to talk to Des Moines", and responding "Does Baltimore have anything it wishes to say to Des Moines?"

    164:

    Graydon@144: Speaking as a functional programmer who writes commercial systems, the biggest change for me as I moved out of the C/Java/Ruby world was not FP per se (though it certainly strongly supported the change) but simply changing the way I think about programming to be more precise. I feel today that when I write a program in Haskell I have more in common with early Algol designers and implementors (particularly what Dijkstra became having learned from all of that) than I do with Lispers.

    (This even applies to recursion: a certain set of Algol designers went through some trickery to sneak a requirement for recursion past the committee; looping appears still to be the preferred iteration construct in Common Lisp today.)

    165:
    In contrast, we've known for many decades that if you want safe string handling, you use an array — and stick a pointer to the end of the array in the first word or so of the array.

    That's what the DNS SRV RR is for.

    Unfortunately nobody (I'm looking at you, HTTP) uses it.

    Why the hell did we end up sticking bloody WWW on the start of every name?

    166:

    If we had proper authentication and/or encryption of packets, distributed denial-of-service attacks would be a lot harder, if not impossible.

    Why? Don't requests from zombie computers overwhelm a server before authentication even begins?

    Would the act of authentication slow down the victim even more?

    I would love an internet with packet authentication though, just to cut down on the spam.

    167:

    I've already read Shockwave Rider quite some time ago. And my copy of FS is pretty old too - at least it has the "A Disturbing and Challenging Book" quote on it.

    I'm not that young anymore, but thanks for the compliment!

    168:

    Spam is a problem; you're just paying Google to deal with it for you. The fact that you're paying them in information about yourself and your correspondents rather than cash doesn't change the fact that you're paying (and paying well; information is Google's business).

    As it happens, Gmail looks (to me and many others) like a really bad thing from a privacy/security point of view. Read the ToS, in which Google state that they will collect personally identifiable information about you and your correspondents, and then state that they won't sell personally identifiable information about you to other companies. Notice the gap? Since when did it become OK to get free stuff for yourself in exchange for personal information about your friends (and the right to sell that information to third parties)?

    169:

    I think I may know one of these ALGOL designers - when I was being taught Generic Programming in Haskell by Lambert Meertens, he talked of those times and generally entertained me by explaining the Church representation of Booleans via analogy with kale with sausage.

    I only found out later that he had been chairman of the Dutch Pacifist Socialist Party :-)

    170: 168 (see also #153 and #163) - I don't use Gmail, and my ISP sends me a digest of "potential spam" which I can then retrieve for a false positive, or ignore, in which case they delete it after 4 weeks.

    I see what you mean in the specific case of Gmail, but that doesn't invalidate the general case.

    171:

    An architectural problem that I've seen in many, many places is the obstinacy with which we continue to use common typing characters to delimit data in all our programming languages. This leads to many problems when managing user-supplied data, and introduces the possibility of injection attacks. I'm thinking especially about languages like SQL, or markup languages/data structures like XML/HTML. Moreover, it seems that the original programmers, being English/Americans, never thought very well about all the issues involved with charsets different from strict ASCII, leading to a situation where programs need a lot of complexity only to deal with these issues, and more complexity leads to more vulnerabilities...

    172:

    If there is spam, someone is dealing with it; either you do it yourself, or you pay someone (one way or another) to do it for you. Most of the traffic a typical mailserver handles is spam. How is this not a problem? Either the spam is slowing the server down in handling legitimate mail, or the server is vastly overpowered for the amount of legitimate mail it handles. Or, usually, both. At the very least there's a cost in hardware and electricity.

    173:

    Uh Charlie, either someone's domestic appliances have achieved sentience or you're getting some blog spam on this thread...

    174:

    I'm not saying that someone isn't dealing with it. I'm saying that I'm not getting it, and I'm not running a black list (or a white list for that matter) several miles long either. Accordingly, I'm not finding it to be a personal problem.

    175:

    "What's youre take on the idea that had the New Jersey philosophy lost, and we were instead running in a world of Multics and Lisp machines, the world would be a much better place?"

    I think there's a comparison to be made with rocket engine technology, here. The Soviets lost the moon race by a narrow margin, but it turns out that they won part of the battle. Their rocket technology was so good that in the 1990s, NASA evaluated and began using their design for certain applications over our own designs. This was a rocket design from the late 1960s, and it beat out our modern 1990s rocket technology. Why?

    Because the Soviets applied the New Jersey philosophy in the large. They actually blew people up by taking large risks, changing multiple variables at once in large ways. American rocket scientists and engineers, on the other hand, took a conservative approach. They modified one variable in small ways until they perfected it and then moved on.

    When analyzing the Soviet design, they discovered that there was no way to move from the design they had to the Soviet design without taking large, multi-variable steps in between. There simply wasn't a path of viable rocketry between the two.

    Backing up to Lisp machines and Multics... there were some great ideas in those systems, but there were some equally great ideas in BSD systems. It was a race of ideas, and the one that won out was the one that ran on cheaper hardware and made networking easier. Those were the features that turned out to be most important.

    I also think there's a flaw in the logic of "cost." The price we pay for rapid progress is the infrastructure maintenance that we have to perform, but how much would it have cost to progress along a more stable path so rapidly? Would it have been possible? Would our economy have grown as rapidly? I think the technology you're pointing at has improved the lives of people throughout Europe, the Americas and Asia immeasurably. If the cost is 50% more sysadmins and malware scanning software at the benefit of 10-20 years head-start, then I'm not sure that's actually a cost worth measuring, no matter how large the aggregate number gets.

    176:

    No, what I meant is that doing it like this (parametrized SQL) makes you IMMUNE to SQL injection: getJdbcOperations().update("INSERT INTO Users (name,surname) VALUES (?,?)", x.name, x.surname); This is a Java example; basically the ? is replaced by the parameters you pass in, and the program knows they're data and not instructions - even if you pass in instructions, they will be treated as data.

    However my university (and almost all textbooks and examples I could find) use plain old String concatenation when teaching people SQL: "INSERT INTO Users (name,surname) VALUES ('" + x.name + "','" + x.surname + "')" ... This makes data and instructions indistinguishable, and most people who learned to do it the wrong way continue to do it the wrong way when they write an e-commerce application. Maybe they remember to escape some characters, but that's about it.
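
    The same split exists outside Java, for what it's worth. Here's a minimal sketch of the bound-parameter approach in C, using SQLite's prepared-statement API (the table and variable names are just for illustration):

        #include <sqlite3.h>

        /* Insert a row using bound parameters: whatever is in 'name' and
           'surname' reaches the engine as data and is never parsed as SQL. */
        int insert_user(sqlite3 *db, const char *name, const char *surname) {
            sqlite3_stmt *stmt;
            int rc = sqlite3_prepare_v2(db,
                         "INSERT INTO Users (name, surname) VALUES (?, ?)",
                         -1, &stmt, NULL);
            if (rc != SQLITE_OK)
                return rc;
            sqlite3_bind_text(stmt, 1, name,    -1, SQLITE_TRANSIENT);
            sqlite3_bind_text(stmt, 2, surname, -1, SQLITE_TRANSIENT);
            rc = sqlite3_step(stmt);             /* SQLITE_DONE on success */
            sqlite3_finalize(stmt);
            return rc == SQLITE_DONE ? SQLITE_OK : rc;
        }

    With the concatenated version, by contrast, a surname containing a single quote becomes part of the statement text itself, and whatever follows it is interpreted as SQL.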

    177:

    "I really dislike the thoughts that I assume must run through spammers and other con artists brains. But they're not that different from those that seem to come out by accident from CEO's mouths either. Maybe the real lesson there is that of needing better education in what each society values."

    But most of these folks just don't care what other people think or value. Period.

    178:

    spam IS NOT a problem, at least if you use Gmail.

    No, you've offloaded your spam problem on to Google, and they've spent tens of millions on it while standing on the shoulders of the people who've spent a hundred times that.

    Spam is THE problem. It is at the root of all problems on the internet.

    (Quick, all those rooted machines forming the botnets that have been discussed earlier: what are those used for, again? Oh, hey, look, spam, and its little brother DDOS, aka spam-with-no-payload.)

    John Hughes@165: Why the hell did we end up sticking bloody WWW on the start of every name?

    Marketing. It's the same reason every URL ends in .com, and the very very first piece of good advice you'll get from any advertising or marketing person is to get yourself a .com.

    179:

    Since this posting went up, my blog's received roughly 500 spams. Relatively few made it into the comments because I have some fairly serious spam trapping and ninja warrior moderators -- but I'm in Australia, they're in the USA, and there's a gap of a few hours during which none of us are in front of the screen, which is why you got to see some of it.

    And which is also why spam is an ongoing problem, even if it's one we've rendered mostly invisible to the end users these days.

    180:

    Spam is as close as life gets to the Terry Pratchett novel in which burglary becomes a nationalised industry in order to keep burglars in work, keep policemen and insurance companies in work, and as a secondary aim, keep the burglary rate down to a tolerable level. We're approaching the point where it's possible to experience very little spam, but only because huge efforts and profits are made by the security industry.

    One wonders what the point is for the spammers these days - looking at the queue at the high-traffic Fistful of Euros, what strikes me is that the Bayesian cunning of the last few years has gone, and now it's back to crude bulk. Perhaps the spambots are free-running, as part of a purely autonomous ecosystem. (Although you wrote that, sort of.)

    181:

    Mistakes of the past: letting engineering types make waaaaay too many decisions about business-oriented things. One of the biggest is the dropping of fixed and/or floating decimal-point arithmetic in favor of binary floating point. We've been chasing down rounding errors ever since. I worked on a computer setup designing business software for the insurance industry that had native decimal floating point in the 80s, and nearly had an aneurysm when I realized that the next big thing was all based on binary floating point. I can't tell you how many accountants put out spreadsheets that didn't add up with pencil and paper because of this.

    And this continues today, with Google leading the way and everything vetted by engineers. It's starting to show.
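    A tiny illustration of the kind of drift being described, in Java, with BigDecimal standing in for the native decimal arithmetic those older business machines had; the figures are invented for the example:

        import java.math.BigDecimal;

        public class PennyDrift {
            public static void main(String[] args) {
                // Sum one thousand premiums of $0.10 each.
                double binary = 0.0;
                BigDecimal decimal = BigDecimal.ZERO;
                for (int i = 0; i < 1000; i++) {
                    binary += 0.10;                                 // binary floating point
                    decimal = decimal.add(new BigDecimal("0.10"));  // exact decimal arithmetic
                }
                // 0.10 has no exact binary representation, so the tiny errors accumulate.
                System.out.println(binary);   // prints something like 99.9999999999986
                System.out.println(decimal);  // prints 100.00
            }
        }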

    182:

    SPAM

    There are many folks who don't get much SPAM of any kind, stopped by their provider or not. They tend to be folks who understand the issues and in general are not people who tend to trust everyone. But they are not a majority and are not the problem.

    In an office of 25 people where I run a mail server, 95% of the SPAM trying to get through comes to just 2 people. Both did "bad" things: signing up for free information on multiple web sites. And now it's basically whack-a-mole. One of them admits easily that they did it and will never do it again. The other person just cannot admit they might be at fault in any way and keeps demanding we stop it, even the Staples ads that she signed up for while refusing to click the unsubscribe link.

    SPAM is much more a people problem than a technical one. But bots are the scourge of my existence.

    As for folks who think that Google isn't looking at their email because they don't have a Gmail account: many 2nd and 3rd tier ISPs and companies outsource their SPAM fight to Google. They just don't tell you.

    183:

    C and programming.

    I despise C as a language for business. But that's a battle that has been lost for at least another decade or so. And when I say C, I'm including C++. C just makes it too easy to mess up; after all, it was designed as a way to generate efficient assembly code without writing assembly.

    But most of our issue is that there are too few "good" programmers. Not creative geniuses. By good, I mean folks who can DESIGN and WRITE code taking into account performance, security, maintenance, usability, and so on, ALL AT ONCE. Maybe 5% can do it. Maybe 1% do it well. The rest can do 1 or 2 but not the others; they just can't think in those terms. I got exposed to this big time when two of us good guys (yes, I consider myself one of them) had to interact with the "good" folks from several major corporations. Out of the 50 or so folks we dealt with, fewer than 5 were in that 5%. But the rest were considered the cream of the crop for those companies. Which means the regular folks doing "regular" programming and maintenance were turning out what I'd call mediocre code. At best.

    184:

    I don't think Vetinari had nationalised the Thieves' Guild.

    It was adequately efficient as private enterprise.

    185:

    179 - Kudos. If Roy hadn't mentioned it, then I would, simply because it's so rare, and the usual reason for it being rare is that you have active spambot hunters.

    182 - Why does this not surprise me? I'm not scoring advertising e-mails from Amazon as part of my spam because I'm sure I could stop them, but don't really want to.

    184 - Exactly. Similarly, he didn't feel the need to do anything about the Assassins' Guild because they took pains to minimise the amount of collateral damage they caused in dealing with a specific inhumee.

    186:

    I wager that today malware over email is a far worse problem than spam over email, as it is the major vector for botnets (of course, they are related problems).

    Remember the days when you could tell people, "hey, that email virus stuff is a hoax! ignore it! You'd have to decode an attached file then run it for anything bad to happen."

    Then Outlook came along, putting the execution of random code a mere two clicks away. And conveniently the code would email the malware to your entire addressbook...

    187:

    It depends on your definition of spam. The classic UBE definition ('Unsolicited Bulk Email') deliberately didn't include any attempt to categorise spam on its intent - it didn't matter whether it was an attempt to sell you something, to proselytise, or perhaps to effect a DoS, if it was unsolicited, it was bulk, and it was email, then it was spam.

    That definition includes malware arriving by email.

    188:

    Yes, this is the definition I'm still using.

    189:

    Since this posting went up, my blog's received roughly 500 spams.

    BTW why don't you have captchas here? Just curious...

    190:

    Boy did I mess up the quoting there or what. Cut and paste fail.

    I meant to say:

    Second, for IP, the concept of how you connect to another host is seriously flawed. Traditionally, when you wanted to connect to, say, SMTP on another machine, you looked up what the port was on your system (e.g., using /etc/services).
    That's what the DNS SRV RR is for. Unfortunately nobody (I'm looking at you, HTTP) uses it. Why the hell did we end up sticking bloody WWW on the start of every name?

    191:

    Why the hell did we end up sticking bloody WWW on the start of every name?

    Assuming you really want to know, or others are curious: because the web servers were initially low-profile services, not considered terribly important. And the technologies involved were still changing often, meaning frequent software, configuration, and content changes, and, as a result, frequent reboots. The main domain (e.g., example.com) may not have had any actual A records, only NS and MX records, and so it would not have been possible to put a web server there at all. And eventually, the www.name.com template became so common that you could just enter name into a web browser, and it would try www.name.com first.

    DNS SRV records are starting to be used -- but at the time in question, nobody had ever used them (I'm not even sure when they were introduced).
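    For the curious, here is a rough sketch of what an SRV lookup looks like using the JDK's built-in JNDI DNS provider; the "_http._tcp" service name is hypothetical, and whether any given domain actually publishes such a record is an assumption:

        import java.util.Hashtable;
        import javax.naming.directory.Attribute;
        import javax.naming.directory.Attributes;
        import javax.naming.directory.InitialDirContext;

        public class SrvLookup {
            public static void main(String[] args) throws Exception {
                // Use the JDK's DNS provider for JNDI lookups.
                Hashtable<String, String> env = new Hashtable<>();
                env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");
                InitialDirContext dns = new InitialDirContext(env);

                // Ask for the SRV record of a hypothetical "_http._tcp" service.
                // Each record is "priority weight port target", so a client could
                // find the right host and port without hard-coding "www" or ":80".
                Attributes answer = dns.getAttributes("_http._tcp.example.com",
                        new String[] { "SRV" });
                Attribute srv = answer.get("SRV");
                if (srv == null) {
                    System.out.println("No SRV record published.");
                } else {
                    for (int i = 0; i < srv.size(); i++) {
                        System.out.println(srv.get(i));
                    }
                }
            }
        }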

    Isn't internet history fun?

    192:

    " Isn't internet history fun? "

    History of Computing is FUN too .. and not too great a digression from the thread...

    See here ...

    " Ellie Gibson joined Ousedale School students learning how to program BBC Micros at the National Museum for Computing in Bletchley "

    Gosh! At Bletchley! ...

    http://www.bbc.co.uk/news/technology-10951040

    Mind you, it didn't seem to me to be all that amusing way back when; as an Audio Visual Systems technician I was obliged to do Mighty Deeds of Educational Politics so as to avoid being made responsible for an entire room full of the damn things... they appeared to me to be breeding overnight in some sort of eldritch way, so that one roomful would become two or three rooms full any time soon.

    It is my own opinion that People LIKE Me .. though not ME of course, for I am entirely Blameless and you can't PROVE otherwise, SO THERE .. were responsible for the entire Mess that we have now, inasmuch as we did not shape impressionable young minds toward producing a more perfect system.

    193:

    You know captchas are crackable?

    We have other, automated, spam detection measures that don't annoy the regular posters.

    194:

    Has spam been covered in international treaties, along the lines of "This country promises not to knowingly harbour spammers on pain of horrible penalties"?

    195:

    193.1 - I didn't, and I can think of a few blog owners that I sincerely wish did! ;-)

    193.2 - Thank you for that. Seriously; Captchas are annoying.

    194 - Can we think of anything horrible enough? ;-)

    196:

    guthrie: no, it hasn't, but doing that would be difficult ("what is spam, precisely?") and there doesn't seem to be much need, given that most spammers are doing something else that's already illegal, and the spam is just a means to an end for them.

    Charlie: "less annoying to regular posters" is great. This is yet another area where you've got it right. Though there is one little thing that caught me: the failure mode when JavaScript is not enabled appears to be designed to disguise that this is the problem, and, as a NoScript user, that had me going in a little frustrated circle for a while.

    (I feel about JavaScript almost the way you feel about Windows: if you're using it by default, you almost deserve to get 0wned. Not that anybody deserves that.)

    197:

    See updated instructions in the comment form below ...

    199:

    but doing that would be difficult

    Worse than that, I think -- it'd essentially amount to a Great Firewall for every country. And as much as I dislike spam (and I dislike spam quite a bit), that would worry me.

    200:

    This is ironic, as to visit this blog today I typed "Charlie's diary" (sans quotes) into my Firefox address bar. It used Google as the directory service, another point of failure (ugh!), but at least Google has servers *everywhere*.

    201:

    Spamming 201, 202.

    203:

    Lawrence Lessig, croaking out a final blog post:

    Or maybe had legislatures devoted 1/10th the energy devoted to the copyright wars to addressing this muck, it might be easier for free speech to be free.

    Context:

    Third, even if I could, and even if the work I was doing meant I should, there's an increasingly technical burden to maintaining a blog that I don't have the cycles to support. Some very good friends -- Theo Armour and M. David Peterson -- have been volunteering time to do the mechanics of site maintenance. That has gotten overwhelming. Theo estimates that 1/3 of the 30,000 comments that were posted to the blog over these 7 years were fraudsters. He's been working endlessly to remove them. At one point late last year, Google kicked me off their index because too many illegal casino sites were linking from the bowels of my server. I know some will respond with the equivalent of "you should have put bars on your windows and double bolted locks on your front door." Maybe. Or maybe had legislatures devoted 1/10th the energy devoted to the copyright wars to addressing this muck, it might be easier for free speech to be free.

    Read the rest.

    204:

    204, ironically, looks like more suspected Spam.

    I repeat from #195 - Is there anything sufficiently unpleasant to do with these people?

    205:

    JavaScript: OMG.

    Your posting led me to try to Do The Right Thing and run Firefox with NoScript. I say again, OMG.

    Trying to do this on the modern web is a really horrible experience. You get halfway through your interaction with the airline and it goes tits-up; you finish typing up the comment form and the Captcha fails, making you start all over (or just give up); gethuman.org doesn't work; and on, and on, and on...

    This seems like an extremely good illustration of where we really went wrong. Trying to be secure is just too crippling.

    And even if you are willing to be crippled, the tools are terrible. I mean, I know the NoScript folks are trying hard, but let's face it, the feedback they give is the nearest thing to useless. Something goes pear-shaped, so you click on their little bar, and what's the choice you get?

    a. Trust these guys forever

    b. Trust these guys just once (no condom just this time...)

    c. Don't trust these guys.

    All NoScript really tells me is how badly I'm crippled if I don't allow the script. It doesn't let me evaluate threat versus payoff.

    There isn't a bloody clue what you are trusting the JavaScript in question to do. All you have to go on is the name of the ostensible source. Even then, the emphasis should probably be on ostensible. Let's say you trust the intentions of the people who wrote the site. Unless they are Google or one of a handful of others, why should you trust that their intentions will carry through? Did they really write all that JavaScript themselves? If so, are they competent to write it securely? If not, what's the provenance of this software? What's worse is knowing that the airline's competence isn't to be trusted (just from interacting with their website), but what choice does one have?

    I have a postgraduate degree in Computer Science, and I can't make any practical use of the information NoScript gives me. It's like having a screening test for a disease for which there is no treatment. All it can do is make you miserable. And I'm willing to bet that NoScript is better than just about any other JavaScript blocker out there.

    206:

    Ironically, your post is now #204, after the post in question got marked as spam.

    207:

    I know The Raven of old, from elsewhere on the net. Doesn't look spammy to me.

    208:

    Sean, good point, well made. :-D I will say this only once: I will not take it unkindly if a post made primarily or solely to highlight suspected spam is deleted by the investigating moderator.

    Charlie, duly noted. My reasoning was that I didn't recognise the posting name from here or elsewhere, they'd embedded a URL under their name, and done one of those "Great site, lots of useful info" posts that don't contribute to a discussion and are the trademark of most human-generated spam IME. Ok?

    209:

    Sure: but a quote and link to an essay by Professor Lawrence Lessig is not exactly in a class with a random effusion and a link to a clean-your-credit-record/herbal v*gr/network marketing site.

    210:

    You forgot to mention that too many internet protocols are text-based.

    As far as encryption is concerned, it should be noted that encryption is useless unless you can authenticate the other end of the conversation. In practice, eavesdropping on network connections does not happen; attacks are made on insecure ports.
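    As a small illustration of what authenticating the other end means in practice, here is a sketch of a TLS client in Java with endpoint identification switched on, so the peer's certificate has to match the host name we dialled; the host name is just an example:

        import javax.net.ssl.SSLParameters;
        import javax.net.ssl.SSLSocket;
        import javax.net.ssl.SSLSocketFactory;

        public class AuthenticatedTls {
            public static void main(String[] args) throws Exception {
                SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
                try (SSLSocket socket = (SSLSocket) factory.createSocket("example.org", 443)) {
                    SSLParameters params = socket.getSSLParameters();
                    // Require the peer's certificate to match the host name we dialled.
                    // Encryption without this check tells us the channel is private,
                    // but not who is actually on the other end of it.
                    params.setEndpointIdentificationAlgorithm("HTTPS");
                    socket.setSSLParameters(params);
                    socket.startHandshake();
                    System.out.println("Talking to: " + socket.getSession().getPeerPrincipal());
                }
            }
        }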

    211:

    paws4thot didn't refer to the post by The Raven; there was an obvious spam post that got through, right after The Raven's post and before paws4thot's post. I marked it as spam, so it's no longer here as post #204, and everyone got confused.

    212:

    Thanks mate.

    Also, for future reference: if I'm posting a hyperlink as a reference to/in my post, I'll post it as {http://...} in plain view, not embed it in my URL field. (The {} characters are used to make absolutely certain the illustration doesn't even think about being a hyperlink.)
