Well, except if the user clicks "Proceed anyway" or equivalent when the browser says that the certificate does not match the server. Not that there's much you can do about that, either.
And now I'll doff the work hat and say that this is probably a problem if somebody reuses the passwords, for reasons already said. And it's still more secure than the current setup.
ISTR there's a cool xkcd about harvesting passwords from habitual password re-users by setting up a dummy site with a registration page. Forget how long ago.
BTW Mr Stross, I read The Rhesus Chart last week and it reminded me just how similar your working experience was to mine. I didn't take up writing (yet... I have always wanted to... I'm sure there's still time); instead I went into architecture, then management (PHB). But your example shows it's possible, which is very reassuring; thank you.
Dunno if you were still around the scary devil monastery in the early noughties, but I was there and at clpm a little at that time. Did web applications with Perl, but possibly the coolest thing I did involved cobbling together an NMS-like tool from Net::SNMP, rrdtool/mrtg and a database. Managed about 20 floors worth of network gear. Could map a logged-in user to physical location via switch port and structured cabling. But what I remember of course is my eyes swimming from screen after screen of numeric oids and getting unpleasantly intimate with a bunch of Nortel MIBs.
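The user-to-port mapping described above comes from walking a couple of BRIDGE-MIB tables on the switch. Here's a minimal Python sketch of the lookup chain, using invented sample data in place of a live SNMP walk (the original tooling was Perl with Net::SNMP; all OID values and port names below are made up for illustration):

```python
# Hypothetical sketch: map a MAC address to a physical switch port
# by chaining three tables as they'd come back from an SNMP walk.

# dot1dTpFdbPort (.1.3.6.1.2.1.17.4.3.1.2): MAC octets (decimal,
# as the OID index suffix) -> bridge port number. Sample data only.
fdb_table = {
    "0.0.94.0.83.1": 3,   # 00:00:5e:00:53:01 learned on bridge port 3
    "0.0.94.0.83.2": 7,
}

# dot1dBasePortIfIndex (.1.3.6.1.2.1.17.1.4.1.2): bridge port -> ifIndex
port_ifindex = {3: 10103, 7: 10107}

# ifName (.1.3.6.1.2.1.31.1.1.1.1): ifIndex -> human-readable name
if_name = {10103: "GigabitEthernet1/0/3", 10107: "GigabitEthernet1/0/7"}

def mac_to_port(mac):
    """Translate aa:bb:cc:dd:ee:ff into the port it was learned on."""
    # BRIDGE-MIB indexes the FDB by the MAC's six octets in decimal
    suffix = ".".join(str(int(octet, 16)) for octet in mac.split(":"))
    bridge_port = fdb_table.get(suffix)
    if bridge_port is None:
        return None
    return if_name.get(port_ifindex.get(bridge_port))

print(mac_to_port("00:00:5e:00:53:01"))  # -> GigabitEthernet1/0/3
```

From the port name, the structured-cabling records take you to a wall socket and hence a desk, which is the "logged-in user to physical location" trick.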
But that was then. Today I'm going to express my team's work allocations as post-it notes on a whiteboard, because we have been instructed that we now use agile methods for management. No-one has explained how that is supposed to work, of course, but the informal advice is to focus on appearances. I've considered holding stand-up meetings where the team literally forms a scrum and tries to push over the wall, but that might be laying it on a bit thick.
Encryption's not needed? Fine, but make it possible anyway. Google's push is decent here: SSL may be a crutch, but it's better than the pothole that plain HTTP is.
Er, I get a username, not a URL, with my Google-provided OpenID account here.
"reminded me just how similar your working experience was to mine."
I think there are many of us. Our paths diverged after Charlie went to writing full-time, but the earlier similarities are eerie.
I agree, most organizations do not use self-signed certs anymore, except occasionally in development and testing environments. While there are problems with the cert signing procedures, I think this is a Good Thing to do.
However, there still is some bad usage culture. Earlier this year (I think, recently anyway) a large Finnish bank (or a Finnish branch of a bank) forgot to update its certificate used in the online banking system. The certificate expired, of course, and customers' browsers started complaining about an expired certificate. The instructions from the bank were (paraphrased) "Just ignore the warning, our online bank is still secure". This is of course quite the opposite of what you should do when your online bank's certificate has problems.
There was much screaming at my workplace when this happened.
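The fix for that failure mode is boring automation: monitor the certificate's notAfter date and alert well before it lapses. A minimal Python sketch of that check, with an illustrative date and threshold (not the bank's actual values):

```python
# Sketch of the check the bank apparently didn't automate: given the
# notAfter field of a certificate, warn well before it expires.
import calendar
import ssl
import time

def days_until_expiry(not_after, now):
    # ssl.cert_time_to_seconds parses the "Mmm DD HH:MM:SS YYYY GMT"
    # format found in the dict returned by SSLSocket.getpeercert().
    expires = ssl.cert_time_to_seconds(not_after)
    return (expires - now) / 86400.0

# Pretend "now" is 1 June 2014 and the cert lapses on 20 June 2014.
now = calendar.timegm(time.strptime("Jun 1 00:00:00 2014",
                                    "%b %d %H:%M:%S %Y"))
remaining = days_until_expiry("Jun 20 00:00:00 2014 GMT", now)
if remaining < 30:
    print(f"renew the certificate: {remaining:.0f} days left")
```

Run that from a daily cron job against each public endpoint and "our cert expired, just click through the warning" never has to be the official customer guidance.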
We had similar problems. We found that getting the team to "make a waterfall" got quite messy and started to smell; and pair programming always ended up with a fight over who got the chair in front of the keyboard; so we were quite excited by the thought of Extreme Programming.
Unfortunately, we've found that lycra isn't flattering on the middle aged, and it's impossible to keep the laptop's network connection up while skydiving or jumping a motorbike over a ramp.
The father is an acquaintance (and CS prof at the Air Force Academy). Somehow I don't think he needs to worry about his son dying on Mars.
Great to hear you're turning on HTTPS. I know it is a hassle; however, after the last year, turning on HTTPS is a service to everyone who visits your site. Recent revelations have shown that anything that is not HTTPS can be used to inject attack vectors against users. (See this op-ed for more info: https://firstlook.org/theintercept/2014/08/15/cat-video-hack/)
Cheers
Doesn't change the basic principle at all: it doesn't matter whether the data comes from youtube or from warez.hackerz.org, you don't trust data from the internet. That's why releases and official downloads are supposed to come with checksums.
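For concreteness, verifying a download against a published checksum is a few lines of Python (the file contents and expected digest in the usage example are invented for illustration):

```python
# Sketch: compare a download's SHA-256 digest against the value
# published alongside the release.
import hashlib
import hmac

def sha256_of(path, chunk_size=1 << 16):
    """Stream the file so large downloads don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_hex):
    # compare_digest is overkill for a public checksum, but harmless.
    return hmac.compare_digest(sha256_of(path), expected_hex.lower())
```

Of course, if the checksum travels over the same untrusted channel as the download itself, it only detects corruption, not tampering, which is where signatures and the CA point below come in.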
That said, I see that CA-signed certs help with guaranteeing authenticity on https connections. If you've compromised the network you can compromise DNS, but you can't fake CA-signed certs. Though if you have the network you can insert your own CA cert in the first non-https Mozilla download... well, or anything else the user will go ahead and install without questioning its authenticity. Which is another way of saying that you're f@#ked anyway unless you're paranoid to a degree at which it's hardly worth your time to function.
Note that deep packet inspection on proxy traffic in an enterprise environment is normal, since in that case you manage the client devices and can install whatever certs you like. And this is in fact exactly the same as MITM from the point of view of either the client or the remote website. BYOD arrangements pretty much hinge on the organisation being able to do that.
time to blow off the cobwebs
I do hope you have provided alternate housing for the spiders/dustbunnies required for system stability in the future?