
The internet of decay

The internet as we know it is nearly 25 years old (that's the world wide web: the pre-web internet is a lot older, and not far off its 50th birthday, but would be unrecognizable to most people today). We're using it for purposes the designers never anticipated, and a myriad of hopeful experiments flourish on the web ... and sooner or later die, or crumble into gentle decline and benign neglect.

And sometimes the neglect is not so benign.

Recently the news broke that internet-connected toys were being hacked. CloudPets stuffed animals have a web connection that allows kids and their parents to send and receive voice messages; they're sold as "a message you can hug". But it turns out that their login database was unsecured and discoverable via Shodan, "the search engine for the internet of things", and huge numbers of logins have allegedly leaked (they didn't password protect the password database—or encrypt/hash/salt the passwords in it). Voice messages for CloudPets users were stored on Amazon's AWS cloud service without authentication, so I leave the mis-applications of this service to your imagination.
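
(For the non-developers: "salt and hash" is not exotic. Here's a minimal sketch of the boring-but-correct way to store passwords, using nothing beyond Python's standard library — the function and field names are illustrative, not anything from the actual CloudPets code:)

    import hashlib, os, secrets

    def hash_password(password):
        """Return (salt, key). Store these two values; never store the password itself."""
        salt = os.urandom(16)  # unique per user, so identical passwords don't produce identical hashes
        key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, key

    def check_password(password, salt, key):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return secrets.compare_digest(candidate, key)  # constant-time comparison

    salt, key = hash_password("a message you can hug")
    assert check_password("a message you can hug", salt, key)
    assert not check_password("password123", salt, key)

With that in place, a leaked database gives an attacker a pile of salts and derived keys to grind through, not a ready-to-use list of every customer's password.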

The worrying part is that the toy manufacturer was extremely difficult to contact and doesn't seem to have any timely process for monitoring or fixing defects in the service (not to mention probably being in violation of the Data Protection Act if the toys are sold in the UK). And of course the toys will probably out-live the company; the half-life of a corporation is 15 years (for a start-up it's about 18 months) but the half-life of a beloved toy may well be considerably longer.

Note that Shodan isn't to blame for the sloppy security practices of a novelty toy manufacturer, any more than Google is to blame for the existence of child pornography on the internet. But there are a lot of novelty toy manufacturers out there, and more and more of them are going to go bust every year, leaving broken toys behind them with no internet connection ... or worse: be taken over by larger corporations who will simply fold the developers into their own teams, continuing to pay the rental on unattended and unpatched servers for the obsolescent product lines until nobody screams when they turn them off. (Google, Nest, Jawbone, I'm looking at you.) Then there are the unattended child monitoring cameras with microcontrollers running unpatched ancient linux distributions with default passwords. And the home security systems/burglar alarms. Internet-controlled smart front door locks (there'll be an app to break that). Network-controlled drones probably aren't a thing yet (unless you're the USAF), but they're doubtless on their way. Internet-connected vibrators have already triggered lawsuits; if you put the data from a We-Vibe together with the owner's NetFlix or smart TV watching habits, or PornTube click-trail, you can probably build up an interesting picture of their predilections. And so on.
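
(To give a sense of how little effort "discoverable via Shodan" implies, this is roughly the whole exercise — a sketch using the official shodan Python package; you need your own API key, and the query string is only an example:)

    import shodan

    api = shodan.Shodan("YOUR_API_KEY")  # placeholder key
    results = api.search("MongoDB server information port:27017")  # example query for exposed databases
    print(results["total"], "matching devices indexed")
    for match in results["matches"][:5]:
        print(match["ip_str"], match["port"], match.get("org"))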

What are the unanticipated downsides of the decay of the internet of things, combined with poor security practices and developers going bust and leaving infrastructure in place as abandonware? You probably know I've got a vivid imagination by now—what haven't I anticipated?


1:

Harry Harrison had a take on this many years ago in "I Always Do What Teddy Says". Given the Barbies linked to IBM Watson, we are almost there.

2:

Might be tempting for funeral companies to hack un-maintained home control systems and shuffle the elderly off this mortal coil by interfering with their heating in winter/aircon in summer... Or just stop the alert systems working when grandma has a fall.

3:

Going to need to cogitate on this one a bit. But Cloud Toys as a McGuffin are likely to create a book/film Sub-Genre of their own.

Collect all 10 Cloudpets with the USA Nuclear Keys stored on them!

Ransomware made Flesh: Give us 1Bn Credits or we won't tell you which Sex Doll we downloaded your Significant Other's Consciousness into. Combine that one with your idea from Rule 42 for some truly sick scenarios.

4:

Internet-controlled smart front door locks

I find your faith in mechanical locks reassuring... you probably don't want to look up "bump keys" or "key guns" on YouTube :)

How long before the Police consider the presence of a door-hacking app on your phone as "going equipped", much as if they found you carrying a set of lock picks? (Which reminds me - what is the UK expectation regarding police searches of smartphones?)

5:

Adware on your smart TV that you can't get rid of.

Your lights randomly flicker until you pay the ransom.

America's funniest celebrity baby monitor camera videos.

I spoke about the risks at LCA Hobart in January in "The Internet of Scary Things"; slides and video here: http://christopher.biggs.id.au/

6:

I already know enough about lock picking, thank you very much.

Luckily it requires physical hardware and a modicum of practice to pick up — casual thieves are unlikely to put in the effort. Downloading an app that does it for you is a much lower threshold of skill to step over, and the app doesn't need to be on your phone until right before you use it, and if the police are searching your phone after you burgled a house then it's just one more item on the charge sheet.

7:

Adware on your smart TV that you can't get rid of.

Already happened, more than a year ago. (Smart light bulbs aren't common enough for them to be a side-order of grief, yet.)

8:

Poltergeist simulation through a home energy management system connected to HVAC, lights, white goods etc. Stalking potential, or the script kiddies' version of prank phone calls?

9:

Assassination through a suborned autonomous vehicle being forced to crash.

Peter F Hamilton used similar plot items in various books (Fallen Dragon, the Commonwealth Saga).

10:

I'll bite on some social things.

I think we'll see a splintering effect. There will be a body of people who will basically rise up and say screw it and just accept most peccadilloes. We're gradually seeing more and more things that, as a teen, you never heard openly talked about. There will be a few that will remain: paedophilia and bestiality seem likely to remain beyond the limits of acceptability, but for a big chunk of society we'll see a rise in "If you do it at home, alone or with consenting adults, so what?" There will be a counter-movement of neo-puritanism, with an anti-technology mix. A sort of neo-Amish flavour if you like. The leaders of this movement will loudly decry both the evils of the acts, and the evils of the snooping that might reveal that they visit a hooker, like dressing up in latex or whatever they do. History tends to suggest that those who shout the loudest about these things often do them…

I wouldn't like to guess which one will win overall, in any country. I'd have put money on Hillary winning last November, but Trump in the US and the GOP in the Senate and Congress seem more likely to push for the latter. However, different states might well go different ways. Here in the UK, Callmedave, with his more liberal attitude to marriage, I would have said would have been in the former camp. Madam May seems to be ruling her MPs with a rod of iron, and I'm not sure how that would spill over to this. She's generally in favour of more spying on the general public, so I think she's more likely to let it go. Some places, like France, you would think would go for the acceptance route; they by and large don't do sex scandals in the same way as the rest of Europe, but they do have that Catholic thing, so who knows? I don't really know enough about other countries and their politics to hazard a guess.

11:

Push a person to psychosis through randomly disturbing their sleep by flickering devices on and off during the night.

Subtle is better, so rather than lights, use standby power on appliances or a single beep from connected devices.

Amazon Echo-style devices could be even better if you can use very high or low frequencies that disturb but can't be heard if the subject fully wakes.

12:

Even in permissive societies there may be other private aspects that become shaming.

An example in our increasingly health-conscious environment may be exposing someone's shopping and fast food habits, especially in countries with groaning socialized medicine. Fat shaming already exists; "McDs" shaming could be next.

13:

Suborned "autonomous" vehicles are going to be a real problem for the wonderful world of the self-driving car - but see also "Halting State".

I was thinking of a different sort of decay on the internet. Like this - backwards to ignorance & superstition ...

Very simply: DO NOT HAVE any internet of things items in your house.

14:

In the longer term (say, 25 years from now), all signs point to Vernor Vinge's concept of the "software archaeologist" being justified: new code will need to operate with massive substrata of old broken code at a scale where actually understanding the layers of older code won't be possible and actually trusting the layers of older code won't be remotely wise. This leads to practices surrounding web services similar to those surrounding land and construction in the physical world: complicated zoning to prevent cascade failures of ancient legacy systems, various tests to determine whether or not something fragile is likely to be affected by new developments (similar to checks for protected ecosystems or historically important ruins legally required prior to constructing buildings), and (probably) a distinction between expensive shiny professional software and unlicensed semi-illegal favela software.

(All of this exists to some extent, but not to nearly the same extent as one would think: professional 'enterprise' software tends to be flaky and unreliable, particularly when it's used only internally, because there isn't enough oversight and people are willing to write short-term hacks to prop it up; the number of developers is growing significantly while most developers haven't been trained in professional best practices, so the average quality is shit; the half-life of services is tiny but nobody cares too much because of the churn in employment and the ready supply of developers. As soon as wages stabilize for software engineers -- as soon as the software engineer valuation bubble pops -- becoming a mediocre programmer will no longer be lucrative, so average competence will increase as the low end drops out and development will become slower and more careful because the total number of available developers will decrease.)

The software world will remain 'haunted' by the poor decisions of the past, and we'll continue to paste cruft over kludge, but we'll be doing this in order to prevent permanently breaking important services that depend heavily on networks of technical debt that can neither be understood nor circumvented.

In the shorter term, I expect to see small-scale computer crime by amateurs get a resurgence similar to the mid-90s, as the influx of mediocre developers has temporarily made many services easier to exploit via technical means than via social engineering. (For instance, injecting malware into npm left-pad.) State & organized crime organizations in this space won't have any reason to bring these people into the fold (they aren't competent; the landscape is just easier temporarily), and have every reason to remain subtle.

15:

I can see some sort of friendly Citizens United-style legislation coming along saying that because so much of the amazing future relies on the mass interpretation of behavioural data, it's no longer a legal right to opt out of data collection. Of course the government isn't allowed to surveil citizens any more, but when they can simply buy the data from Amazon's new mandatory Echo installations for the cost of a tax break, why would they need to?

16:

Imagine an island near Europe, in financial, economic, political strife. Data center operator goes bust or close to. How much to buy a used data center?

17:

Where one can read, another can write. How about an alibi service where you enter the time, date and location you want to have been in, and your house electricity meter, HVAC, lights, Amazon purchase history and Google Music history are adjusted to show that you were, in fact, at home and not with the mistress (or robbing the Bank Of England) at the time. If you also leave your personal phone at home (expedient, in both cases), your call records and browser history can also be fixed. Now let the police prove you weren't home (or in a hotel in Berlin) at the time of the crime.

18:

Have you read about this hack?

https://www.theregister.co.uk/2016/11/10/iot_worm_can_hack_philips_hue_lightbulbs_spread_across_cities/

The recent release of the CIA hacking tools on Wikileaks suggests that they have been looking at options for hacking smart vehicles.

19:

Let me now propose Rule 35 of the internet: "If it can be hacked, it will be".

(This almost certainly already exists under some other name, but what the hell.)

20:

How about this one:

Electric cars are almost silent, so in the future governments everywhere have passed laws requiring synthetic engine noise. Car manufacturers have responded by making the default noise resemble a sewing machine and selling upgrades to V8 turbo noise.

An ingenious hacker has modified his car's firmware with a novelty noise: his car now clucks like a chicken. However, he didn't realise that this major manufacturer uses a peer-to-peer mesh update system in which each car passes on software updates to nearby cars of the same make via wi-fi. His car sees the new noise as a software update and passes it on. Soon every town centre sounds like a chicken farm!

21:

I'd suggest more like "If it has a network connection it will be hacked. If it doesn't have a network connection then it might be hacked."

22:

IOT - The S stands for security.

I share the pessimistic outlook: the number of unpatched, unsecured devices is growing exponentially.

23:

A "Honey, where are the kids?"

B "been hanging in the park after school since we paid off the CyberBullyBotnet. Great we could pay them since you created the vacancy for your promotion with ExecutiveIncomes.blurg"

A "ok let me just check NipperLocator. Hang on, that's funny ....."

24:

An interesting application which allows for a lot of insight into true connections between online entities is the new upswell in internet/app-driven board-games. I can offer links on demand, but FFG is the flagship on this one, and large enough to make a difference in the niche. This type of social interaction is massively successful in my circles, much more so than either classical board- or video games and is actively proselytizing. The apps are free and provide a forced expiration of the physical game, but further plans suggest that each player will have its own app/avatar and use FB/Google or other avatars to log in and have consistency across games/playgroups. Imagine the potential of having a self-updating accurate map of not only online but real-world connections that the targets happily keep updated themselves. Regarding toys, RC cars with network capability and sensors sound really interesting as targets, and I know of enterprising students that have built them out of parts and sold them to others.

25:

"There will be a counter-movement of neo-puritanism, with an anti-technology mix"

the Butlerian Jihad?

26:

Ransomware made Flesh

Pretty sure Greg Egan already wrote that one.

27:

I was peripherally involved with the alternative wide-area network (X.25, Coloured Books, OSI and all that), and one of the objections to what became the Internet was that it was catastrophically unreliable and insecure by design. That was pooh-poohed on the grounds that (a) RAS and security can always be added later and (b) they weren't all that important, anyway. As any fule kno, both claims are, to be polite, complete bollocks.

There isn't any technical difficulty in building an almost-perfectly secure system, though much of the (practical) expertise on how to do so has been lost, and I am pretty certain that one could be designed to interface to the existing Internet. Obviously, any 'privileges' that were obtained from or sent to existing, insecure systems would be compromised, but that could be controlled. What there isn't, is the political will - look at how banks use their security to put the risk on the customer, rather than allowing the customer to achieve even a good 1970s level of security.

I am very out of touch with the current state of the art in this area, but I think I would have heard, so I suspect that any current secure systems (as distinct from components) are very limited and single purpose.

28:

As soon as wages stabilize for software engineers -- as soon as the software engineer valuation bubble pops -- becoming a mediocre programmer will no longer be lucrative, so average competence will increase as the low end drops out and development will become slower and more careful because the total number of available developers will decrease.

Mediocre programmers are only employed because of the shortage of better programmers. The indications are that the demand for programmers will continue to increase faster than the supply of good programmers resulting in both increasing salaries and an increasing proportion of mediocrity. Programming is the last job that will be automated after AI does everything else.

29:

Market segmentation, then regulation.

Someone is going to go to the effort of writing firmware for most of the functionalities you want networked to "Proven Correctness" standards.

... Uhm. I think I have a startup idea now. Excuse me while I go pursue my financial ruin and personal misery (I don't enjoy doing code to that standard!)

This is a pain in the neck, but it is also possible. Then high end IoT makers will license this code. Or compile it if it ends up in the intellectual commons.

Thus, some of the internet of things will be very hard indeed to crack, while anyone who buys a cheap knock-off also buys a security problem. Wait a while, and a couple of sufficiently gruesome abuses, and IoT devices not coded to Proven Correct standards will be made illegal. That doesn't mean they won't be sold - see the market for Prada knockoffs - but it will limit the extent of the problem.

Cars will almost certainly be the first examples of this: self-driving cars don't actually need very much of their computational innards to be exposed to the internet at all - map updates and traffic reports at most, and even for those, sane design has the car trusting its sensors over the map. It is entirely possible - and perhaps the best design choice - to just not give self-driving cars any network capability whatsoever, and only update maps with physical dongles. Not USB standard.

30:

Of course, only small-timers will use their own phone for breaking and entering. A serious criminal looting the house of someone with real valuables will either have a cheap burner (and chuck it in the river afterwards) or use a stolen phone and gloves. (I imagine the best target to "borrow" a phone from for the purpose will be a petty criminal, since they're both less likely to report theft of their phone to the police and less likely to be believed when they tell the police they didn't do the robbery.)

31:

Back in 1980, I thought that the public would rebel against the decay of RAS and ergonomics in computers and networking - how wrong I was! I suspect that the same is true today and that, in the short term, the combination of marketing and dumb customers will cause the public to regard security breaches as simply a fact of life. I.e. I side with "El", but think that the body of people will be almost everybody.

People have mentioned hacking cars. When network connected car diagnostics first came in, my immediate thought experiment was hacking (almost) every car's pratnav and radio to warn of blockages ahead and outwards after the next junction of the M25, so directing all the traffic into London. At rush hour.

There will, of course, be terrorism - which, according to the Act, includes socially dysfunctional teenagers simply causing chaos. In addition to cars, hacking services will cause some fires and other disasters, simply because the control software is not (and will not be) designed to be fail-safe. That's not just domestic services, but any infrastructure that uses a commodity thing as part of its control.

I don't see that we are going to see a massive increase in blackmail or even snooping, because that is already possible and the existing social and legal mechanisms keep it more-or-less under control. That does, however, assume that the thing manufacturers upgrade the security of their systems in the same way that the computer ones did. I am predicting the same level of system insecurity and related abuse, but different.

What we are going to see, however, is a massive increase in abuse by marketing companies and departments, because it takes the same effort to avoid that on a thing as on a conventional computer. Also, it's much easier to force someone to reply, because there isn't the option of killing the browser (or the windowing system).

The reason that I finally installed an ad blocker is not because of the bandwidth, but because of the frequency with which some sites' advertisements were taking control of my browser (or more) and stopping me from even opening a new window. Control-alt-F1 and killing the X server three times before I could get an essential page to load? That's insane. Yes, I know the reason I have trouble is that I run my browser in an unusual way, which triggers some bugs that most people rarely see, but that is to make it vastly more secure than the standard options allow. The consequences for things should be clear ....

32:

Surely I'm not the only one familiar with the classic "universal" garage door remote, which lives in the glove box and can open pretty much 2/3 of automatic garage doors, more if given the opportunity to learn the relevant pattern.

Someone I know well certainly had a lot of fun as a teenage tearaway messing with people by making their doors open again after they closed them. Translating that into housebreaking isn't a big step.

33:

25 years from now in which direction? From personal experience, it's back :-) Far more of current software development and 'improvement' than is commonly realised is because the infrastructure is malfunctioning and nobody dares to touch it because nobody really understands it. Two examples:

Some 20 years ago, I was consulted by IBM on some code and logic that nobody in the development group knew why it was there, and they wanted to rewrite. Nothing major, but ....

That applies even to TCP/IP etc. 15 years ago, I was searching for a really in-depth expert to help with a bizarre failure mode that, for various reasons, had become too serious for me to ignore. To my horror, I discovered that there were none in the UK, probably none in the rest of Europe and only four left in the USA. Note that, in this, I am talking about a level of expertise that is WAY beyond the ordinary level at which people get called expert - at the time, I was close to that myself. So there is a serious failure mode in TCP/IP where, as far as I know, I am still the world expert - and all that I know is its signature and that it is almost always extremely rare but, under some unknown circumstances, can become common enough to prevent distributed systems from working.

34:

Matthew Garrett has had a few posts about the joke that is IoT security.

Last line in one of them:

My vacuum cleaner crashes if I send certain malformed HTTP requests to the local API endpoint, which isn't a good sign.
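
(For anyone wondering what "malformed" means at this end of the market, the bar is roughly this low — a sketch with a made-up address; only point it at hardware you own:)

    import socket

    def poke(host, port=80):
        """Send a deliberately broken request to a device's local HTTP API and see what comes back."""
        with socket.create_connection((host, port), timeout=5) as s:
            # Bogus method, silly version, absurd Content-Length, no body: not valid HTTP.
            s.sendall(b"BORK /status HTTP/9.9\r\nContent-Length: 99999999\r\n\r\n")
            return s.recv(1024)  # a robust device answers 400 Bad Request; a fragile one just falls over

    print(poke("192.168.0.50"))  # hypothetical vacuum cleaner on the local network
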
35:

Electric cars are almost silent, so in the future governments everywhere have passed laws requiring synthetic engine noise.

Electric cars aren't silent; a big component of auto noise is from the tyres, plus wind noise. (Source: have heard a few Teslas.)

Also, some existing gas guzzler muscle cars have sound systems to replicate the big-block V8 sound of yore: because noise is a sign of energy inefficiency and modern cars are anything but.

36:

To be fair, the average hybrid car in central London is terrifyingly good at sneaking up on you if you aren't paying close attention, due to all the surrounding noise; the fact that most of them are Uber drivers and therefore homicidal maniacs is a side note.

I've noticed they've more recently been adding an artificial whine so pedestrians can easily hear the damn things coming, rather than relying on eyesight.

37:

Programming is the last job that will be automated after AI does everything else.

The real problem is that, after 60 years of learning, we still don't know how to turn J. Random Student into a programmer. The level of abstraction required in reasoning about software seems to defeat roughly 50% of would-be comp sci students, and they never get much beyond cut-and-paste expertise.

38:

On seeing my first "universal" remote control:

"Well, this changes everything"...

Thankyou, I'll be here all week :)

39:

Ah, no: remote-bricking of smartphones has already been a thing for a couple of years. Apple first (owners of shinies don't like the idea of random strangers pawing through their selfie collection) but it'll be ubiquitous soon. And once bricked, you can't re-enable the phone without access to the owner's authentication method, which (if they're sensible) will be 2fa based.

40:

Electric cars aren't silent; a big component of auto noise is from the tyres, plus wind noise. (Source: have heard a few Teslas.)

The problem isn't an electric car on the highway, the problem is an electric car at a stoplight. Blind person hears nothing. Car starts to move forward, but tire noise and wind noise are imperceptible until the car gets moving at a decent rate...but they've already hit the blind person before they do that. (Yes, they shouldn't go until the blind person is clear. If you can get everybody to actually pay attention to what they are doing, you are a god AICMFP.)

Dual-mode hybrids have the same problem. Common-mode hybrids, where the electric motor only supplements the gas motor, don't, because the gas motor is running when the brake is off, and the start-noise is even louder than the engine idle noise.

41:

What are the unanticipated downsides of the decay of the internet of things, combined with poor security practices and developers going bust and leaving infrastructure in place as abandonware? You probably know I've got a vivid imagination by now—what haven't I anticipated?

To date, most (consumer) networked stuff sits on a desk or in a pocket. If it stops working, it's easily sold/ recycled/ binned/ put at the back of a cupboard.

Once you literally, physically embed things in the fabric of your architecture, that gets harder and more expensive. I can see literal abandonware being a new signifier of poverty, dysfunction, elderly technophobia, and urban blight, like a front yard with a couple of rusting cars on blocks last century.

An old (in)secure drone/ courier delivery box at the front gate, with a marker pen arrow pointing to the milk crate wired to its side.

Curtains which are never opened because a firmware update bricked the mechanism.

Shoddy highrise apartment blocks condemned or gutted not because of any structural flaw, but because it isn't economic to replace the malware riddled IoT doorlocks/ thermostats/ lightswitches/ smoke alarms/ passive solar shutters etc., which were such a selling point when the buildings opened.

42:

Business idea ...

Most business lives online, e.g., sales numbers and budgets, marketing scripts, HR reviews, etc. Imagine a junior [dept] manager who's run out of ideas and decides to hack his/her major corporate rival and either steals their strategy/personnel, or realizing that stealing would be too obvious, opts for the chess route to come up with a superior competitive strategy. Alternatively, this manager could obtain such detailed knowledge of their key customers that they can create a personalized product/service for that customer. This knowledge would also help the hacking marketing manager know exactly how much to offer to get the deal ... improve profit at a lower risk. Similarly, by knowing the org chart of the client firm as well as the key people involved in decision-making, it would also be possible to come up with arguments in advance for each potential negative. Surprised this isn't a thing yet ... lots more money in B2B than consumer goods, yet the data mining and service/goods personalization seems to be solely focused on end consumers/gen pop.

Security first ...

Going forward security should be baked into any S/W or app first before anything else is layered on. Probably not that hard to do, and the source code for this security could be housed/governed by a UN-style body that the world trusts and that has sufficient budget and authority to maintain its operations, research and investigative force.

Scenario A: The core of any system is first loaded with the 10 (or n) commandments that will always be inviolable within that system. Or the app will cease to function if any attempt is made to violate these commandments. The first such commandment will be: there is only one god [owner], and only my god will be able to access my files - ever! If my god does not specifically okay updates/new apps that are unusual to history of use/preference, I will stubbornly insist on validation and confirmation of intention and I will check for and show all problems/risks ever associated with this app.

Scenario B: Progressive licensing for Internet users and IoT manufacturers. No reason that some type of exam/qualification/guarantee system can't be put in place especially if the people building and selling these systems want to pass themselves off as professionals. IMO, the IoT needs to abandon and stop romanticizing its hacker culture roots. These hackers were morally underdeveloped teenagers looking for laughs. Most have grown up by now and acquired some moral compass, i.e., social responsibility - and so should the industry they helped create.

Scenario C: Vatican City uses AI to track and monitor its priests ... finally the claim that the Pope knows everything to do with Catholic doctrine and its practice is verifiably true. [Substitute your GP for watching your diet and exercise; tax dept. for where you go during what hours re: any extraordinary income/loss; police/auto insurance/car manufacturer for driving habits and maintenance; municipality/home insurance/real estate agents for home renovations, maintenance records for integrity of house for resale or insurance claims, etc.] Because the VC effort is such a success in rooting out and curbing bad behavior, this program is rolled out to all elected politicians and F500 CEOs. (Seriously - these folk chase the limelight, so why not shine it on them 24/7.)

Scenario D: Appliances/car connected to your GP and preferred retail/grocery stores, etc. - Your GP places you on a diet and exercise program and gives you an app to load into your appliances and online shopping app that will track their contents, your usage, and also send an approved shopping list for your next grocery visit. This app will also talk to your car and will self-park at a specified walking distance of your destination. Further, this app will talk to any diagnostic equipment that you have been told to use and will record/chart your progress. Since many such apps are likely to reside on your smartphone, it will also alert your GP, 911, etc. if you keel over and need help, and will also open the door to verified first-responders.

43:

Not USB standard.

Well, there's no particular reason not to use the USB hardware standard on the data device side, so long as the car's receiving specs are designed for security. Having a funny hardware port will give no actual security, but the illusion of security by obscurity might make the designers skimp on actual data security. However the connector is designed, hackers of every stripe will get hold of the hardware and the protocols quickly enough.

(Incidentally how many people posting to this site do not have a collection of old cables for now obsolete ports?)

But yes, the car side of the system will need extensive security protocols with a key system to ensure only authorised alterations can be made. Which of course makes a manufacturer's encryption key/signature store a massively valuable target.
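
Something like that car-side check fits in a dozen lines — a sketch with the Python cryptography package, assuming the manufacturer signs every image with an Ed25519 key whose public half is baked in at the factory (all names here are illustrative, not any real vendor's scheme):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    VENDOR_PUBKEY = b"\x00" * 32  # placeholder: the real 32-byte public key is burned in at manufacture

    def apply_update(image, signature):
        pub = Ed25519PublicKey.from_public_bytes(VENDOR_PUBKEY)
        try:
            pub.verify(signature, image)  # raises unless the image was signed with the vendor's private key
        except InvalidSignature:
            return False  # reject the blob and leave the flash alone
        write_to_flash(image)  # hypothetical helper standing in for the actual install step
        return True

    def write_to_flash(image):
        pass  # stand-in only

The hard part isn't the dozen lines, it's keeping the private signing key safe for the life of the product - hence the massively valuable target.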

A whole new side of the Internet of (Insecure) Things is the new Electronic Temporary Tattoos, currently using Bluetooth or NFC for connectivity. Yes, people as Internet Things. Take over the site that the data is sent to, that's e.g. your vital signs, location etc. being monitored by whoever has it. "Wackiness ensues".

44:

Mediocre programmers are only employed because of the shortage of better programmers. The indications are that the demand for programmers will continue to increase faster than the supply of good programmers resulting in both increasing salaries and an increasing proportion of mediocrity. Programming is the last job that will be automated after AI does everything else.

The lack of security is not (completely) because of bad programmers. Even the best one can't program securely if the other parts of the chain don't buy into security, starting from the managers who see security-related items on the feature list and drop them lower in priority than features they can sell.

There are probably solutions for this - for example, during the perhaps five years I've been looking at Web security, frameworks and default configurations have gotten better at it. However, the web is a more unified field than embedded devices.

Security kind of needs to be thought of for the whole life of the product.

I agree that the TCP/IP model is quite broken in regards to security. I'm just not sure what could replace it.

45:

Incidentally how many people posting to this site do not have a collection of old cables for now obsolete ports?

I have done a major dethatching in the last year, and most of them are now gone. With infinite storage space I might have kept them, but as a city dweller, I don't have that, and things need to justify themselves. Well, other than the old Tektronix 556 Sillyscope and the VT220. I mean, those aren't things, those are relics!

46:

Mediocre programmers are only employed because of the shortage of better programmers.

Mediocre and bad programmers are employed because they are vastly cheaper than good programmers, and the software and hardware industries have made it so they're not responsible for the flaws they create, so as long as people buy their products, those bad developers will keep developing to the best of their ability.

47:

Which is why physically embedding hardware in architecture is a bad idea. Fortunately, most buildings aren't going to have that, because it is (a) expensive, and (b) not a match for typical construction arrangements.

Sure, the fancy offices custom built for a tech company might do that for the "look at our offices of the future" prestige. But for the average house, it's far cheaper to just build in loads of conduit, then surface-mount all the fancy stuff, which the customer will pick & choose on purchase. In your standard construction, different trades work on the building at different times. So for the conduit, the builders put it in as they build the external & internal walls; later the electricians come along, run all the wires through the conduit and mount all the electronic hardware.

That way the buyer gets to pick and choose how much high tech they want/can afford, while the builder only needs to order for expensive high tech gadgets when they've already got the buyer, and hopefully the sale will be through and the money arrived before the invoice for the gadgets arrives. It is merely a beneficial side effect that in the future it can be all replaced, even down to the replacement of whatever is running inside the conduit.

On these lines, the latest Scottish Building Standards now require all new builds to include an entry point in the building for high-speed internet connections. Not any requirement for types of cable, service or anything like that, just for the right shape & size of hole to allow such services to be easily fitted or replaced without affecting the weatherproofness of the building.

48:

I can't imagine a competent divorce lawyer involved in a custody dispute who won't use this "information" to the fullest. And if said lawyer outsources the actual investigation to a "reputable" surveillance firm (that just happens to fabricate evidence for the right price, but I can't name any more than a couple more than a dozen or so that I know of that do so right now...), he/she will have plausible deniability.

Somewhat more humorously (if you have a sick sense of humor), this is the system that would have made the Peter Graves/Martin Landau IMF possible: Not the latex masks, but the knowledge (silently obtained, without the footprints of 1960s-era humint intelligence-gathering) to make an impersonation work. Even confidential business meetings are going to require multifactor authentication when done face to face! And that's assuming that all of the potential surveillance devices at the meeting site have been interdicted or shut down... I predict that rented time on windswept moors will become a Thing for multibillion-Euro merger/acquisition proposals.

49:

Push a person to psychosis through randomly disturbing their sleep by flickering devices on and off during the night.

Over history, the character of the delusions of the mentally ill shifts with the culture of the day. Religious delusions give way to a belief that one is receiving orders from Napoleon, to UFO abduction and transmissions from little green men.

In the early noughties I knew a floridly schizophrenic person who was convinced he was being surveilled by hidden cameras everywhere. At the time, that was almost a bizarre delusion, in the technical psychiatric sense that it wasn't realistically possible for the malign conspiracy with him at its centre to have a camera everywhere he went. In 2017, it's almost plausible.

The idea that every flickering lightbulb or stubborn doorknob is controlled by a malign intelligence is the stuff of paranoid nightmares. And I'm sure it will be: we'll see a shift in the nature of our delusions and fears. (Obama tapped my phone!)

When hands-free devices first became commonplace, I remember every time I walked down a crowded street I'd have the experience of thinking: there's a mentally ill person mumbling to herself; ah, no, she's just on the phone.

There's a corollary to Clarke's law here: The user of any sufficiently advanced technology is indistinguishable from a magical-thinking nutter.

50:

More broadly, the knowledge or belief that there's a little bit of intelligence in all the things around us strongly resembles animist belief systems. William Gibson was playing with that idea with the voodoo stuff in his Sprawl novels.

51:

Scenarios where everyone grew up plugged into the Internet.

Early warning/early intervention:

Based on the New Zealand study (link below), developmental (physical and psycho-social) problems can be detected by age 3. Given that almost all new parents will have some type of baby monitoring device in their baby's room and that this new generation of parents is even likelier to routinely check on their kids via home IoT linked to their smartphone/smartcar it should be possible to collect and process data on their children's development (both physiological and psychological) to identify problems much earlier. Similarly the same tech could help monitor progress and better identify what works and what doesn't for that child in that environment. All of this info could then be called up and processed by a doctor, lawyer (in a child custody dispute), or accredited social agency. Considering the cost of running good quality longitudinal studies, this scenario would actually be very cost-effective. For the kids and families, it could prevent a lot of suffering.

http://dunedinstudy.otago.ac.nz/studies/sub-studies/next-generation-study

http://dunedinstudy.otago.ac.nz/publications

The above scenario could probably also be adapted to rehab, prisons, post-deployment ... but especially to monitoring behavioral and physiological health of seniors and any patients participating in any clinical trial. [No more fudged, lost or misinterpreted data.] The emphasis on seniors is three-fold: (1) they're a growing demographic, (2) historically their health/development has been ignored as 'that's just old age, nothing we can do about it', and (3) they're increasingly falling victim to various types of crime/scams and disease/medication-linked side effects, e.g., Parkinson's Disease - gambling.

http://www.prd-journal.com/article/S1353-8020(13)00074-6/abstract

Safe-to-Post [Your IoT Conscience TM]

Built-in reader/detector of all visual and text media with an in-built sliding rating scale of 'safety' ... probably a consumer level extension of whatever Google is using to vet trolling. This type of app would be set to whatever the user's license is which would depend on age, gender, social/psychological maturity (because of varying legal permissions across jurisdictions), and legally-recognized relationship status.

Example: Baby lying on blanket (with naked bottom) pictures would be approved to keep on mom & dad's smartphones and could even be sent to the grandparents only if each of this family members' devices are set/programmed to recognize each other as members of the same social net/group. This photo however could not be sent in error or deliberately to anyone else because it would fail the safe-to-post rule. Same set-up with provocative pix sent/received by an adult couple: access to such data would be granted to only one pre-identified/family-circle registered device and for only as long as the couple is together, i.e., ends as soon as one of this couple opts out. As soon as the opt-out occurs, all access/privileges would be cancelled and revocation would include erasure of all 'private' data/images.

52:

I'm sure Daesh, pedophile rings and the like will use it to generate fake traffic against their websites in order to stymie law-enforcement and spooks trying to use traffic analysis to identify terrorists/pedophiles/etc.

53:

Also, some existing gas guzzler muscle cars have sound systems to replicate the big-block V8 sound of yore

I've seen single purpose, dedicated aftermarket car sound systems which mimic the sound of a turbo pop-off valve on gear changes.

Apparently popular with yoof wot drive the cheaper, non-turbo versions of turbo cars.

54:

members' devices are set/programmed to recognize each other as members of the same social net/group.

Ha ha yeah, right!

I take it you've never grappled with Apple or Amazon's versions of what constitutes a family for purposes of home sharing. Or Facebook's.

Human family/affinity groups are endlessly complicated and recomplicated in ways that formal logic doesn't easily capture: here's an oldie but a goodie by way of illustration: Gay marriage: the database engineering perspective.

55:

Two tier society, where the pluto/kleptocrats can afford all the latest shinies and the underclass lives in constant fear that their information appliances will turn on them.

Any attempts to update firmware not approved by the manufacturer are illegal due to global roll out of the worst parts of the DMCA via "free trade" agreements.

Planned obsolescence on overdrive. The latest iPhone isn't just a status marker, it's a survival necessity.

Mix of off-grid living hastily kludged onto a 21C information economy. You're better off using a chiller with dry ice than risking having your food poison you after your freezer was defrosted, then refrozen overnight.

Of course, there's going to be a plucky resistance, hacking and securing all the things to help the poor and oppressed. Radical Stallmanites: Free software by any means necessary.

56:

They're called banks. Give us lots of money or we'll let your economic system die.

57:

That's my point: family/personal social circle to be defined by/from the user's (not the profit-maker's) perspective. And, like with cars/booze/drugs-and-kids, parents are legally and ethically responsible for how their kids access and use IoT.

FB/AAPL/AMZN are in the money-making biz: if reducing and preventing social/psychological hurt can make them more money vs. current offerings, they'll do it. (Two of the three appear to recognize this.)

58:

My partner and I use Apple and Netflix for family sharing without issue. In Apple we just defined each other's Apple ID as the sharing account and with things that allow sharing, they're there for the other person. In Netflix, I pay for two screens and when we're apart we can both watch.

We don't have kids, so I'm not sure what the limits are on numbers for Apple, but they don't care about the fact we're both women.

59:

Scenario C: Vatican City uses AI to track and monitor its priests ... finally the claim that the Pope knows everything to do with Catholic doctrine and its practice is verifiably true.

Or an ordinary church, like the Southern Baptists for example, buys data on its members and fits out a surveillance center so they can "make sure their members don't sin." Then if you leave The One True Church they broadcast everything you've done to everyone you know. (For U.S. political reasons, this is the one that frightens me.)

60:

OK, so here's your exam question from Database Design 101:

  • Adam is a widower. He was married to Bea (deceased) with whom he had two children: Clive and Doris.

  • Eric is divorced. He was married to Fiona. Fiona has two children by a previous marriage, Greta and Hillary. Eric shares parental responsibilities for G and H with Fiona.

  • Adam and Eric are currently married.

  • Because Adam and Eric don't have enough headaches, Fiona is currently acting as a surrogate and is pregnant with Adam's child, who will be adopted by Eric.

Design a fully normalized database schema that can accommodate the aforementioned family.

Show an example query to determine whether any two individual members are eligible for "home sharing" of internet content with one another (one must be married to, a parent of, or a child of the content owner, or be the actual content owner).

For added credit, show how the database schema may be generalized to encompass polyamorous marriages, divorces, and bereavements.
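
(A deliberately partial sketch of a starting point, in SQLite via Python — hypothetical table names, relationships stored as rows rather than columns, and it already creaks at the "added credit" part:)

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE person (
        id       INTEGER PRIMARY KEY,
        name     TEXT NOT NULL,
        deceased INTEGER NOT NULL DEFAULT 0
    );
    -- Marriages are rows, so remarriage, widowhood and (eventually) polyamory are just more rows.
    CREATE TABLE marriage (
        a     INTEGER REFERENCES person(id),
        b     INTEGER REFERENCES person(id),
        ended TEXT,               -- NULL = current; otherwise 'divorce' or 'bereavement'
        PRIMARY KEY (a, b)
    );
    -- Parental responsibility, biological or otherwise: one row per parent/child pair.
    CREATE TABLE parent_of (
        parent INTEGER REFERENCES person(id),
        child  INTEGER REFERENCES person(id),
        PRIMARY KEY (parent, child)
    );
    CREATE TABLE content (
        id    INTEGER PRIMARY KEY,
        owner INTEGER REFERENCES person(id)
    );
    """)

    # Eligibility: :who may home-share content :content if they own it, or are a current
    # spouse, parent, or child of the owner. Run as, e.g.:
    #   db.execute(ELIGIBLE, {"content": 1, "who": 2}).fetchone()
    ELIGIBLE = """
    SELECT 1 FROM content c WHERE c.id = :content
    AND ( c.owner = :who
       OR EXISTS (SELECT 1 FROM marriage m WHERE m.ended IS NULL
                  AND ((m.a = c.owner AND m.b = :who) OR (m.b = c.owner AND m.a = :who)))
       OR EXISTS (SELECT 1 FROM parent_of p WHERE (p.parent = c.owner AND p.child = :who)
                                               OR (p.child = c.owner AND p.parent = :who)) )
    """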

...

(I think this thought experiment makes it clear that most US Republican legislators are failed DBAs at heart.)

61:

"Do not have any 'internet of things' devices in your house" is already difficult. My printer keeps asking to be internet enabled. I don't have a TV, but many TVs already want to be internet enabled, and it's quite possible some won't work without it. There aren't that many manufacturers of refrigerators, stoves, and laundry appliances...you may have the choice of accepting IOT or doing without.

For that matter, my credit card has been "internet enabled" for years. Once upon a time I just used it, but suddenly using it required approval over the internet. (Well, some network, I'm guessing it's an encrypted internet connection. Which leads to the question of "How secure is the encryption?".)

FWIW, IBM has announced that they are developing a 50-qubit computer. That makes most encryption permeable. How long until the price drops?

As for the dolls...they add an additional level of problem, but smartphone apps have already gotten there. IIUC, many of them have access to the phone mic even when they aren't invoked, and they can both store content and phone home. And many of them have poor or no encryption.

The "benign neglect" is a real problem, but I don't see it as leading to any problem that isn't also present in just poorly (or maliciously) implemented software.

62:

Don't forget the recent Amazon downtime which gave us some insight into what might happen when the cloud stops reigning.

63:

Re: 'Two tier society, where the pluto/kleptocrats can afford all the latest shinies ...'

Unable to picture any two-tiered society that's made up of pluto-/kleptocrats and hordes of unwashed, uneducated, trampled-upon minions that lasts more than one generation. Who exactly keeps everything running? Who creates new gizmos? Who feeds and looks after the elite and their property - and how well? Where do these carers come from? How are they trained/evaluated? How and where do they live? What food/upkeep do they need in order to be capable of providing the care-taking? You're describing the minimum level - survive, not thrive. For a society to be healthy enough to continue developing, its constituent members also need to thrive.

Charlie once asked what the minimum number of ITs needed for a generation ship or colony would be. Don't recall what the responses were, but think that as the total population grows, you'd actually need more and more ITs - not because they were becoming inefficient due to typical hierarchy/pyramid problems, but because scaling often leads to greater differentiation which usually means not just the new/novel type of IT but also new types of interstitial ITs that would be needed to communicate between an also increasing number of IT subgroups/clusters. The new interstitial ITs are what make this type of growth appear 'inefficient'.

I think these are related problems.

64:

On the 'going equipped' theme, I recently realised I'd been doing that by accident for the last few decades (at various levels). The family home had 1950s vintage 2-lever locks on most things, and the answer to "need another key for 'x'" was to rummage through the tub of old keys to find one with an approximately correct 'bit' and keep filing the wards off until it reliably operated the lock. My keyring thus held two standard Yale keys and five different skeleton keys up until I was forced to move out. Add in working on a friends house, so wandering the streets at night with various carpentry tools and a crowbar, and it could have been very awkward if I'd been stopped. (I realised this after I'd moved house (dropping all the keys and reducing the wear on my pockets), when I idly wondered what the correct name for that particular type of crowbar was, and the search engine returned the word: "Jemmy". Oops.)

65:

I disagree. Mediocre and worse programmers are employed (as programmers) because those making the personnel decisions don't have any capability of recognizing a good programmer. There is probably also a shortage of good programmers, but that's not the basic reason that mediocre and worse programmers are employed (as programmers).

66:

Having been through the Apple and Amazon "family grouping" setup fairly recently, it works well for simple setups. All they seem to care about is whether the individuals within the group have credit cards, and their age (for limiting access to certain films/books/music etc). No gender-related questions.

The other limitations (membership of only one "family" at a time / time delays when switching between families) appear to be aimed at preventing the "I bought this item, I shared it with all my family, I've now switched to a family-group based around my flatmates, now I've shared it with them too" use case. Seems fair enough.

So; just invite everyone into a big family. There doesn't seem to be a stated limitation to the family size, presumably they have to worry about the quiverful polygamous worst case...

67:

A lot of home cinema stuff is wifi controlled now (none of mine, I promise you) and it would not astonish me if (for example) someone uploaded a program to switch video projectors on at 2 AM, or otherwise do something to use up the lamp extra rapidly when nobody's looking. The lamps often cost nearly as much as the projectors...

And there was a news story today about MI5 hacking smart TVs for that Big Brother feeling. Old news to you, I expect.

68:

Of course, there's going to be a plucky resistance, hacking and securing all the things to help the poor and oppressed. Radical Stallmanites: Free software by any means necessary.

Now I am picturing a hacker Harry Tuttle in the cyberpunk remake of Brazil

69:

Some high-end cars do that now, in reverse. Use the (very good) sound system to play engine noises inside the car, so the customer hears the sound of a powerful engine they expect to hear.

http://jalopnik.com/5973177/this-is-what-a-modern-bmw-sounds-like-when-engine-sounds-arent-piped-in-by-a-computer

I admit I like hearing engine noise when I'm driving, because I use the sound to shift gears (as opposed to the tach, which requires taking my eyes off the road).

70:

It's not just at the stoplight that you can't hear an electric car. Tyre noise doesn't become significant until over 15 mph...different people, of course, have different levels of sensitivity to sound...and on local streets I often see a silent electric car moving by. (Well, it's not really silent, but it's hidden by background noise.) 15 mph isn't very fast but it's easily fast enough to hit a kid who isn't paying attention (and they don't...they've got their eye on the basketball).

71:

Aw, thought this post would be about the under-addressed phenomenon of link rot.

A PSA, then: you can and should put all the links on your sites in the Internet Archive with this wee daemon.
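
(The guts of such a daemon fit in a few lines — a sketch that assumes the Wayback Machine's "Save Page Now" endpoint and the requests package; the link list here stands in for whatever your site's link-extractor produces:)

    import time
    import requests

    def archive(url):
        """Ask the Wayback Machine to snapshot a URL; best-effort, exact response details vary."""
        resp = requests.get("https://web.archive.org/save/" + url, timeout=60)
        return resp.headers.get("Content-Location") or resp.url

    for link in ["https://example.com/"]:
        print(link, "->", archive(link))
        time.sleep(5)  # be polite: the endpoint rate-limits aggressively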

72:

I'm always surprised that hellhole besieged warzones in Iraq, Syria and elsewhere have functioning mobile phone systems, but it's rumoured to be such a good source of intel for the US, Assad etc that no one wants to shut it down.

If future militaries can easily hack large numbers of old obsolete, insecure IoT devices in an area of interest that could be a useful source of battlefield intel. Anything with a sensor could be useful for surveillance. Cameras and microphones obviously, but even a thermostat might be able to tell you if a building is occupied or not. A lightbulb strobing could be a homing beacon for a missile. Speakers and screens can carry propaganda, demands for surrender etc.

I have my doubts about how well all that data would be analyzed, but it might be in the interests of a future General to ensure that an enemy area retains electricity supply.

73:

OK, so here's your exam question from Database Design 101:

I assume you are aware of this:

Gay marriage: the database engineering perspective

74:

Think we should keep in mind that most IoT tech is imported vs. manufactured locally so that first-dibs on any data collected would go to the foreign manufacturer. IOW, China could have much more dirt on USians overall than could the Russians. And, if these Chinese manufacturers bothered to conduct any analyses of captured behavior vs. category purchasing history, they should be able to come up with even better ways to further market their products to USians.

75:

Nice and easy. I can set up a single family to share baby photos with both my husband's sister and my own brothers. Now both sides also have their own babies, and don't want to share private photos with each other (because they are virtual strangers and have met 3 times). They don't want to share them to my family group, so set up their own.

76:

That's the point. The actions of a major faction of the global elite are leading directly to feudalism. You don't think such a society would work; you are more intelligent and interested in different things from our rulers. They simply don't see the failure modes you do.

I mean this is the week in which the "replacement" for the affordable care act has been rolled out, and if enacted hundreds of thousands if not millions of people will have a miserable unnecessary death, and it's getting criticised from the right for not going far enough toward a 'market' solution and not allowing selling of insurance across state lines, i.e. it isn't designed to kill enough poor people.

As for the OP, the listed problems with the internet of things are already here. I can envisage a future where there is a large segment of people who simply do not do such technology at all. I personally will not buy any IOT device and do not want a smart meter. There will be another segment who enthusiastically buy the things, but who yoyo in and out of enthusiasm as the flaws keep getting exposed.

I can certainly see any current relationship problems, teenager 'borrowing' money problems, blackmail etc all simply becoming more common across the population because there will be too many loopholes and opportunities.

More serious problems will be variations on the issue of obsolete equipment running old code, so old smart meters will fail, causing major power loss issues.

New low-level criminal activity will involve walking around (incidentally getting fit!) trying to find old and open bluetooth and other signals and hoovering up what can be got, then selling it on up the criminal foodchain. So the stop and search laws will have to be amended somewhat, and the police will get (if they can afford it in post-Brexit Britain) new mobile phone checking devices that in turn hoover up the data and check for the illegal sniffing software.

77:

"Very simply: DO NOT HAVE any internet of things items in your house."

Bang on.

While the number of posts on the internet - or even just the number that Charlie links off his Twitter - that both enthuse gushingly about, and warn of the awfulness of the failure modes of (without apparent awareness of the contradiction), items like fridges and dildos that have no earthly reason for being connected to the internet but nevertheless are, is enough to make one weep for humanity, the apparently low real-world prevalence of such idiotic devices offers, by contrast, some grounds for hope that a surprising number of people aren't quite that stupid. I've never encountered one of those devices outside of such a post - not even happened across one for sale while searching Amazon for something else, let alone seen actual hardware - nor have I noticed anyone outside the authors of such posts evincing any enthusiasm for the idea. I think such devices appeal to a particular class of people, the kind who buy "PC weather stations" in preference to looking out the window, while the majority of folks respond along the lines of "just why the sweet fuck would I want to connect my toaster to the internet?"

So really the problem solves itself just by being left alone: let the shit hit the fan, the sooner the better, before people start buying these things not because they actually want the functionality but purely for dick-length competitions with the people next door. Let the bad publicity become widespread while it can still be a storm in a teacup, so that it creates the general perception that these things aren't just a supremely pointless idea, but a definitely bad one. (Even better if the fuckup forms some irresistible parallel to an episode of Only Fools and Horses.) If people become less likely to buy a washing machine because it has an internet connection, then they won't be made with one - this is after all an area where production engineers say not to bother with a washer on that bolt so as to save 1 penny every 50 units.

Cars, at least, definitely will have to conform to legal standards for security, because cars are generally quite good at killing people and anything important about them gets regulated. It just needs the lawmakers to recognise the need, so again the sooner some cockup reaches the mass consciousness the better.

78:

Imagine how demoralizing it would be for a soldier on a future urban battlefield if every espresso machine and Tickle Me Elmo he sees tells him he's going to die. By name.

(Which reminds me of Michael Marshall Smith's roving bands of feral coffee machines).

79:

Yes. On my street there is rarely more than one car moving at a time; when that car is a Prius running with the engine off you just can't hear the thing.

(Artificial engine sounds - laugh - invented by me in the 80s to make a B20E sound like a 16SVT, but not implemented because it would have needed a huge and expensive rack; reinvented by Lotus some time later to make their little whizzy engines sound like big rumbly ones; now it seems every bugger's at it.)

80:

I think there's some extra fun to be had when you throw augmented reality into the mix. All those IoT devices are going to be WiFi enabled, right? With ransomware and unethical marketing spewing out of obsolete devices and straight onto your visual field, in the cities of the future it may be more relevant to note that you're never more than twenty feet from a pwned IoT device than to note that you're never more than twenty feet from a rat.

It could give rise to a new type of low end maintenance job as well, street cleaners who walk around triangulating and deactivating old IoT devices to prevent them from spewing their filth all over the nice parts of town.

81:

"And The Dish Ran Away With The Spoon" by Paul Di Filippo:

Facing my rival that fateful afternoon, I finally realized I was truly about to lose my girlfriend Cody.

Lose her to a spontaneous assemblage of information.

The information was embedded in an Aeron chair mated with several other objects: a Cuisinart, an autonomous vacuum cleaner with numerous interchangeable attachments, an iPod, and a diagnostic and therapeutic home medical tool known as a LifeQuilt.

...

82:

Yes, that's a nice story. Even today, a lot of people feel that their partner's smartphone has displaced them in their partner's affections, with good reason.

What Greg Tingey and Pigeon are missing is that we increasingly don't have the option. It is getting increasingly hard to get equipment that doesn't operate wirelessly, or at least has the 'option' to use it - and it's often not YOUR option, and is sometimes enabled but not documented. In a few parts of the UK, you can't get an electricity supply that doesn't use a 'smart meter', and there have been proposals to do the same for water supply and waste disposal.

83:

A lot of the push for wireless in the UK has been driven by the difficulty of retrofitting a legacy housing stock with suitable cables. Loxone for example has two ranges, a wired normal one for new builds and a mesh wireless one for old properties that can't be rewired.

About the only home automation I can think of that isn't going wireless is Crestron, and that's because the kind of person who can afford it is likely to easily pay for the wires to be hidden properly. Even some blind controllers are going wireless.

Smart meters and Internet TVs are the tiniest baby steps in home automation.

84:

There's already been a successful attempt to cause an epileptic attack over the internet. Kurt Eichenwald, a reporter for Newsweek and someone with publicly acknowledged epilepsy, was hit by a flashing gif embedded in a tweet. (He's since disabled autoplay for his Twitter account.)

85:

That's an important point: we aren't getting a choice about all this. Between the desire to microbill us for everything (coming soon: oxygen pricing per litre, with a discount for when you are exercising), and the drive to find something to do with all the technology so as to make money*, we're being pushed into it without a say. See also Facebook and Twitter changing how they present things to us despite us being happy with how things are.

  * look at the ring end of the wearable tech market. Some are crowd funded, a great way of losing your savings. Others are definitely high end and focused, which makes more sense, but at the end of the day do they really do things better than what we have already?

86:

Artificial engine sounds have been around in science fiction since at least 1975. Follow this handy ISFDB link to have the short story in question spoiled for you.

87:

Xylem: not only was I aware of that, I linked to it earlier in these comments!

88:

Oh, what about Cat Rambo's "Red in Tooth and Cog", where the abandoned IoT appliances don't lie down and die as easily as we'd like?

89:

I imagine you're aware that Internet Of Things devices are already being hijacked for DDOS attacks?

https://www.schneier.com/blog/archives/2016/10/security_econom_1.html

Not exactly a book-worthy plot, but it makes the point that ANY unsecured device with an Internet connection is already a public nuisance, even before considering any sensors or manipulators it might have in the real world.

90:

I actually like the raspy little voice of the turbo six, though more car manufacturers should take a page from Horacio Pagani who decided that "dammit, if I'm going to make a turbo car, I'm going to play with the sounds of the intakes and blowoff valves until they sound suitable for a car named after a wind god!" to paraphrase slightly.

Now, hacking/decay stuff?

Hmmm, bad programmers are a problem, but what about bad git maintainers and good programmers with bad intentions? You'd think nobody would maintain a git repository without being aware enough of what is being pushed upstream to watch out for obvious problems and for potentially exploitable code being inserted deliberately, but think about the scale of expertise that would require!

Possibly related, but not requiring said git-hacking: officers get a call to investigate a mysterious death, they arrive on the scene and find an older man lying in bed with his own artificial arm clamped on his throat and punching off the bloodflow to his brain...

91:

The end game is when AI takes off and inhabits this horribly insecure Internet of Things. (Yeah yeah AI won't happen. I'm speculating here.)

At that point we get the post-Utopian world of Michael Swanwick's Darger and Surplus stories, where humanity relies on mechanical or biological technology and nobody in their right mind would try to operate an electronic device because it might have a network connection.

As portrayed in "The Dog Said Bow-Wow" it's a quite different world to the usual cyber-whatsit future.

92:

Erm... s/punching/pinching

93:

See, for example, John Varley's "Press Enter".

94:

https://www.wired.com/2015/12/2015-the-year-the-internet-of-things-got-hacked/

Medical devices.

It makes sense to allow medical devices like pacemakers to be reconfigured. After all, a doctor might need to change the settings slightly to suit a patient. Unfortunately, security is not a strong point of the medical device industry.

The most trivial possible use for this is simply to kill someone, but as more esoteric medical devices start to get implanted into people, I'm thinking there's potential for slightly more subtle hacks.

Say you're doing a big business deal. You could just kill the person negotiating for the other company but then they'd just replace her. Alternatively she might have an artificial heart, liver, kidney or pancreas.

In a hundred million dollar negotiation it may very well be economic to hire someone who can give you a remote control for some of her medical implants so you can drop her blood sugar to mess with her mood at the right time.

Much more trivial: stick enough random out of date wifi connected programmable devices in a house and the potential for creepout multiplies.

I'm thinking some high-quality grief-ware that infects your LAN and tries to find a set of insecure devices (something viruses already do) with cameras, microphones and speakers (a little more work), runs some image recognition to build up a map of your house (high end, but there's already software which will take a set of photos from a location and try to stitch them together into a 3D shape) and maybe even works out where you are at any given time.

Then, when light levels are low (easy) and the vibration sensors in something near your bed (there's already an app for a phone on a nightstand for sleep tracking) indicate you're not quite asleep ... start playing some barely audible sounds from speakers in the room. Think creepy horror-movie intro soundtrack fodder: faint sobbing, skittering, chittering, hissing. If a scan of the target's social network profiles indicates any phobias, maybe customise.

The moment the person sits up or turns on the light, pick that up through the vibration sensors or cameras and the sound stops.

No particular element here is especially high-end; I could imagine someone creating something like this purely for malicious shits and giggles.

Basically malware which fucks with your head and tries to make you think your house is haunted or that the walls are full of rats.

95:

Doesn't seem terribly difficult if you only try to track the information necessary for your requested query. Is there an assumption that it also has to track all the other information from the story?

  1. Table of People.

  2. Many-to-many relationship table for parent/child. (Irreflexive; no exact duplicate entries.)

  3. Similar relationship table for marriage. (You could limit it to marriages of exactly 2 individuals, but you imply it would be preferable not to.)

  4. 2 and 3 could possibly be combined into a single table with a "relationship type" attribute.

    Rely on the operators to tell you whenever a relevant relationship is formed or broken. (The database doesn't care why. No relationship implies the existence or non-existence of any other relationship. Every relationship is mutable.)

    Obviously you'd want some extra conveniences on top of that in a real application, but I think that's everything necessary to enforce your requested sharing rules.
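
A minimal sketch of the schema described above, using Python's built-in sqlite3 module; the table and column names are my own invention, not the commenter's:

```python
# Hypothetical schema for the person/relationship design sketched above.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE person (
    person_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL
);

-- One row per relationship; 'kind' is the "relationship type" attribute
-- suggested above, covering both parent/child and marriage links.
CREATE TABLE relationship (
    from_id INTEGER NOT NULL REFERENCES person(person_id),
    to_id   INTEGER NOT NULL REFERENCES person(person_id),
    kind    TEXT    NOT NULL CHECK (kind IN ('parent_of', 'married_to')),
    PRIMARY KEY (from_id, to_id, kind),      -- no exact duplicate entries
    CHECK (from_id <> to_id)                 -- irreflexive
);
""")
```

The sharing rules then reduce to queries over the relationship table; the database only records whatever facts the operators assert, exactly as described.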

    96:

    Reasonably designed LED and fluorescent lights have a flicker rate that's fast enough to be outside the detectable range, but faults can result in a consciously detectable or subconsciously detectable flicker. Infect IoT lights with malware that simulates such faults, allowing a programmable flicker, and you can set up a varying progression of flicker until you identify the flicker frequency that triggers an epileptic seizure in a person. Said people may not even realize they're susceptible but the hacked IoT lamps could run through enough variables to uncover hidden problems.

    Take it a step further - some IoT lamps are now also incorporating blue-tooth speakers, allowing the malware to have both auditory and visual components. Couple varying flicker rates with sounds that are outside conscious detection but still perceived by the brain, and you've got a recipe for severe brain reprogramming with potential for lots of mental damage, and quite possibly physical degeneration.

    97:

    You might also need to audit that data, so deleting it is probably a bad idea. That's why most designs have end dates rather than just deleting rows. It also solves the problem of "what do you mean I have no mother" that deleting rows causes. But almost every relationship will need start and end dates (eg, if I adopt a child who is later de-adopted).

    For that reason marriage becomes a lot more complex, and realistically if you allow "relationships in the nature of marriage" rather than requiring government and church paperwork, you end up with quite complex situations (eg, my girlfriend is not married to my wife, but my boyfriend is).

    So, should my ex-wife still be able to watch movies I've purchased, or is that limited to only those I'm in relationships with right now, or even just the subset I live with? Does anyone I live with count as part of my family, even lodgers/housemates/roommates (WTF merkins, roommates you don't share a room with?)
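
A tiny illustration of the "end dates instead of deletes" point above, again with sqlite3 (table and column names are made up): the divorce stays on record for auditing, and "current" is just a filter.

```python
# Hypothetical marriage table that closes rows with an end date rather
# than deleting them, so history survives for auditing.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
CREATE TABLE marriage (
    a_id    INTEGER NOT NULL,
    b_id    INTEGER NOT NULL,
    started TEXT NOT NULL,   -- ISO-8601 dates
    ended   TEXT             -- NULL while the relationship is current
)""")
db.execute("INSERT INTO marriage VALUES (1, 2, '1995-06-01', '2004-03-15')")
db.execute("INSERT INTO marriage VALUES (1, 3, '2010-09-09', NULL)")

# Person 1's current spouse(s): the 2004 row is filtered out, not destroyed.
print(db.execute(
    "SELECT b_id FROM marriage WHERE a_id = 1 AND ended IS NULL").fetchall())
# -> [(3,)]
```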

    98:

    "what haven't I anticipated?"

    Dial-up modems and service providers make a big comeback once a marketing campaign is begun to boycott wireless broadband subscriptions: "YOU pay for wireless, it's YOUR information, even if it has no value to you, any of this woolgathering of data collected by the internet of things, obviously it's worth something to someone, or why would they collect it? If it's no good to you, it must be worth something to them, so why do YOU end up paying for the wireless to send it their way? Do without it, until it's FREE, or better still make THEM PAY YOU to use it. Join us at It's 1996 Again, learn how privacy used to feel all over again. Or get paid not to!" Once people rediscover how painless it is to save hundreds monthly with a ten dollar modem service, the whole business model of IoT will implode like a dropped CRT.

    99:

    There's a niche market waiting for dollmakers in Germany.

    Specifically, dollmakers who can tailor doll-sized uniforms from the Ministerium für Staatssicherheit - the Stasi.

    It's illegal, in Germany, to have a concealed listening device and at least one web-enabled talking toy has fallen foul of this, facing prosecution from the authorities while owners (or rather, their parents) are being ordered to disable or destroy the devices, which monitor all conversations in the room and send them to a server.

    But it's legal, in Germany, if the listening device is openly declared.

    So: cute little MfS uniforms for dolls and teddy bears, and every German child's first sentence will be 'Have Mutti and Vati demonstrated their loyalty to the Workers State today?'

    If that ever became law in the UK or Scotland - and, sadly, I doubt that we will ever have laws forbidding covert surveillance - there would be a market for a facial reconfiguration add-on pack that gives your internet-enabled toys the reassuringly maternal features of Theresa May.

    100:

    Ok, it's early but you store:

    Person Table: pID, first, last, (other pertinent information for your store).

    Family Table fID, pID, (other pertinent information for your store).

    I can't wrap my head around the necessary query to disseminate to everyone in the family in one go, but you do something like: fID_to_share = SELECT fID FROM familytable WHERE pID = (the pID from persontable); then SELECT pID(s) FROM familytable WHERE fID = fID_to_share;

    How fussy you want to get about things like polyamory, being in multiple families and so on is then down to your store policy choices. That structure doesn't forbid anything. You could require pID to be unique in the family table so you can only be in one family, but I'm clearly in two - my parents are alive and my partner is alive, so I'm in the family of my birth and the family of my adult romantic entanglement. My birth family is simple, but I could have step-siblings on both sides and birth parents and their second marriages and so on. It's easy to describe why you could be in multiple sharing families.

    I think it's easiest to say "let the family grow and ramify as much as they desire", but when you sign up for family sharing, send every member (or every adult member) a reminder every year (so you'd need to store a "date created") asking if they still wish to share with all these people. Netflix take a slightly different approach: you subscribe, and for every extra body you pay a bit extra to have them in your "family", which encourages you to prune them if things go sour.
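
For what it's worth, the "everyone in any family this person belongs to" question can be answered in a single query with a self-join on the family table. This is a guess at what the comment above was reaching for, reusing its pID/fID naming; the data is invented:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE persontable (pID INTEGER PRIMARY KEY, first TEXT, last TEXT);
CREATE TABLE familytable (fID INTEGER NOT NULL, pID INTEGER NOT NULL);
""")
# Person 11 belongs to two families (birth family 1, adult family 2).
db.executemany("INSERT INTO familytable (fID, pID) VALUES (?, ?)",
               [(1, 10), (1, 11), (1, 12), (2, 11), (2, 20)])

# Everyone who shares at least one family with person 11, in one query:
rows = db.execute("""
    SELECT DISTINCT other.pID
    FROM familytable AS mine
    JOIN familytable AS other ON other.fID = mine.fID
    WHERE mine.pID = ?
""", (11,)).fetchall()
print(sorted(pid for (pid,) in rows))   # -> [10, 11, 12, 20]
```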

    101:

    I already linked this elsewhere, but to repeat relevantly: Feudalism? Unfortunately, yes. As you say it's incredibly stupid, whilst being "clever" ... until it inevitably crashes, I suppose

    102:

    That's the problem - I am well aware of this utter insanity. The only hope is to try, as far as possible, to either disable or shield or turn off the IoT "connections" as soon as you possibly can. Your fucking privatised utility that insists you have a "smart" meter is going to be stuffed if you put a grounded Faraday cage around the thing ON YOUR PROPERTY, so there's nothing they can do about it ...

    Mayhem @ 83: "the difficulty of retrofitting a legacy housing stock with suitable cables" - no comprende, senor?

    103:

    OK, so here's your exam question from Database Design 101:

    * Adam is a widower. He was married to Bea (deceased) with whom he had two children: Clive and Doris.

    * Eric is divorced. He was married to Fiona. Fiona has two children by a previous marriage, Greta and Hillary. Eric shares parental responsibilities for G and H with Fiona.

    * Adam and Eric are currently married.

    * Because Adam and Eric don't have enough headaches, Fiona is currently acting as a surrogate and is pregnant with Adam's child, who will be adopted by Eric.

    Subsequently, because of [problems], Helen, who may or may not be a relative of A, B, E, F, or [F's former partner], is declared an 'additional guardian' of one or more of C, D, G, H or [unborn]. But also, what about Imra, who has been flatting with A & E for the past five years? (I work in an area touching on family law; stuff gets interesting.)

    104:

    Not so much "missing" as "don't agree". True of printers, perhaps; plugging "inkjet printer" into Amazon gave me a page with 19 listings of which 17 had wireless connectivity. But the two that didn't weren't exactly hard to find; they were there, on the first page, they were perfectly respectable, and they were also the cheapest.

    Doing the same test with "flatbed scanner", though, I got 17 listings of which none had wireless connectivity. Ones that do have it do exist, but they don't show up on the first level. And that's still in the computer area where you might expect to be finding wireless connectivity.

    And in the wider area of non-computer things - kettles, dildos, utterly barmy garbage like internet-connected light bulbs that have loudspeakers in (who even comes up with this crap?) - wireless connectivity is definitely the exception and not the rule.

    105:

    Re smartmeter/Faraday: Yes there is: ultimately they'll cut someone off - terms & conditions etc (interfering with meter/supply). Granted, any test case in the Courts will be interesting.

    106:

    Firstly, down to 2 out of 19 IS "increasingly hard"! And it is quite likely that those two will also omit some other features that some people need. It's not yet virtually impossible, but just you wait.

    Secondly, I said "operate wirelessly, or at least has the 'option'", not "wireless connectivity". We have a central-heating controller like that, there are doorbells, and so on. And, for some kit, a maintenance engineer can connect wirelessly but the user can't.

    Thirdly, are you sure that they don't have it physically installed but not documented or configured? It's extremely common to use the same controller across a range, with differing numbers of features enabled, and there have been cases in the past where such 'hidden' features have been enabled, with 'interesting' consequences.

    Greg Tingey #102, last paragraph: pre-stressed concrete (or, occasionally, stone). We had that problem at work, badly.

    107:

    Except stand-alone scanners are a dying breed. The current multifunction devices have replaced most of them, unless you want negative/slide scanning ability (and even then I'd go with a dedicated machine - the specs are overrated to a Trump-like degree). I was recently looking for a replacement to my 12yo(?) A3 scanner - scanners started at NZ$1,000, multifunction devices approx NZ$300. Most with SD/USB stick/wireless connectivity.

    108:

    Washing machine jailbreak services?

    109:

    Ahem: if you need document scanning rather than photo/negative/slide scanning, you might want to look at the Fujitsu ScanSnap range. I swear by mine — an ix500. Duplex scanning at up to 20 pages per minute with OCR and PDF or Word docx output, PDFs tagged with keywords. Wifi so I don't need it to be physically cabled to the machine it's storing page images on: you can stick it on your desk but have it splat the output at a file server elsewhere on your LAN. And it's built like a brick shithouse: a few months back I scanned roughly 12,000 pages of manuscripts and page proofs in my spare time over the course of a week, and the only paper jams were due to sheets of paper that had been mangled by the rubber bands the production editors had used to hold the bundles together.

    Mind you, this isn't a toy. It came on the market at £600; I was lucky enough to snap up a Prime Day bargain for £270. And this model maxes out at US Legal rather than going up to A3; A3 is generally a minority/professional interest and priced accordingly.

    110:

    I can assure you that the manufacturers will make it more expensive to avoid their snooping. Supermarkets have already done that, with their 'loyalty' cards. Even today, if you want to ensure that you aren't being snooped on ('to improve our service'), you have to build your own firewall - COTS modem/routers will not do what you want.

    111:

    Incidentally, the malware potential for a ScanSnap ix500?

    • Might be possible to write malware that will use the scanner as a server for attacking other machines on the local network (ie botnet)

    • Might be possible to send copies of scanned documents to a host over the public internet (ie snooping)

    • If linked to an EverNote account, might be possible to compromise the EverNote account from the scanner

    • Might be possible to burn out the motor/light bar/make annoying noises

    But to get malware onto the ScanSnap in the first place, you've got only three plausible mechanisms: plug it into your computer via USB cable and run a signed updater from Fujitsu, have a wireless attack launched from another machine on your LAN that's already compromised, or feed it a page that triggers some sort of stack-smashing error in its imaging hardware (highly unlikely).

    So as IoT gadgets go, I think the wireless document scanner isn't much of a problem (especially as I keep it switched off and unplugged when I'm not, like, scanning thousands of pages).

    112:

    Many smartmeter contracts require you to provide wireless connectivity for the meter. They don't need to prosecute you - they can simply cut you off.

    113:

    Ref the "silent car" - This was a thing back in the 1970s; a Jaguar XJ12 (includes E-type series III, XJ12C and Daimler Double 6) on the standard Dunlop tyres and cruising at urban speeds, say 30mph (50kph), made next to no noise other than a little tyre swoosh.

    114:

    Oh, really? :-)

    http://www.caribflame.com/2017/03/cia-mi5-turned-samsung-tvs-into-spying-devices-even-when-switched-off//

    How do you know that there isn't a juicy capacitor somewhere in the power supply that keeps the snooping alive even when the mains power is off? I shall now go and polish my tinfoil hat (they work SO much better when shiny).

    115:

    If the loyalty card gives you 5% of your spend back as shopping vouchers? I mean there's nothing I actually can buy from said supermarket that I object to them knowing about and 5% of my total spend there is like getting 2.5 weeks shopping free every year!

    116:

    Tyre noise is very dependent on the tyre material and tread and the road surface; a smooth-tyred bicycle on a smooth road is almost inaudible at 25 MPH, but a knobbly-tyred one on rough tarmac can be quite loud at 10 MPH.

    But PLEASE remember that quite a lot of people (especially the elderly) are partially deaf. I can't hear most cars behind me even at 40 MPH.

    117:

    A juicy capacitor in the external PSU that is physically unplugged from the scanner? Or one inside the scanner that can keep snooping on the house wifi after it's been unplugged for a couple of weeks?

    Y'know, if the Five Eyes had such a capacitor technology, the world would be a very different place. (See also: Robert Heinlein, "shipstones".)

    118:

    I agree. When I spoke of court proceedings I was thinking of someone trying to challenge the gas/power company. If you intentionally make the smartmeter unreadable, they can remove it, or even cut you off at the street if they can't get access. This has always been the case (UK at least, YMMV elsewhere). The ways of attempting to bypass paying for your gas over the years are many and interesting (or just bloody dangerous), as any Transco engineer will tell you over a pint or two.

    119:

    Even a few hours would be enough to be 'useful', by catching people who thought that unplugging everything (or even turning the house supply off) would protect them. But, as I think I implied, expecting such features is well into tin-hat territory. What is NOT would be the ability of the spooks to enter the house of a suspect and install such an extra - or replace one commodity item by an apparently identical one with the extra feature.

    120:

    Your purchase record is valuable info. Whether it's worth more than the 5% rebate is a different question.
    But Target was doing sophisticated analysis back in 2002 on this sort of data, to the point of knowing when someone was pregnant almost before they did. I can only imagine the level of analysis they can do these days. In the UK, Tesco was famously more savagely competitive than Walmart, and that's not a sentence you see often. This sort of thing is huge, and vouchers are the tip of the iceberg.

    121:

    As the (very happy) owner of a Nissan Leaf I can confirm that the silent electric car issue is very real and at times quite worrying. One of my hobbies is playing guitar with a quite busy local rock covers band, which quite often means that I'm navigating the nightlife hot-spots of local towns at peak pub/club throwing-out time, and the incidence of people stepping straight out in front of my car is significantly higher than when I've used (by IC standards quiet) conventional SAAB, Audi, Mercedes, or Rover products for the same job in the same places.

    It turns out that the difference between very quiet indeed (like, say, a Mercedes S-Class) and to all intents and purposes silent (the handbook says my car has a low-speed noise generator, but I can't say I notice it at all!) produces a step-change in awareness of and/or reaction to the presence of a vehicle. If you spend a lot of time in areas with a heavy incidence of potentially distracted pedestrian/cyclist traffic, it requires a lot more observation and a much more cautious approach, or sooner or later you're going to end up hurting somebody...

    On the other hand I do feel much less conspicuous when making a ham-fisted job of reversing into my driveway at 3:00 AM post-gig. :-)

    122:

    That myth again.

    Target knew someone was pregnant before her parents did, which is not the same thing.

    123:

    I saw your tweets about your massive scanning binge and wondered what your workflow was - scan to OCR'd doc/pdf? and how to tag/organise the scanned docs? I thought it would have made an interesting post (still do).

    I've got an HP Officejet with a 30-sheet duplex ADF which takes care of A4 stuff - not at your level of use though. The A3 (a Microtek 9700xl) I mainly use for scanning old mags/comics (1920s-40s issues of Chips, Comic Cuts, etc), which ADF isn't an option for. It is SLOOOOW at 200dpi - pre-LED tech. Yes, A3 is a minor use case, but the current A3 multifunctions have definitely replaced the standalone machines at under 1/3 the price. Mind you, the ix500 isn't flatbed either, which is what Pigeon originally referenced. I also swear by Vuescan's software for keeping older scanners working under more recent OS's.

    Agree, not sure what good/effect hacking a networked scanner could have, as compared with the recent printer hack.

    As for slides/negs, any flatbed that advertises better than 1600 dpi is probably lying through their teeth, if the tests from filmscanner.info are to be believed (they test using a glass 1951 USAF test target).

    124:

    Keyword in that sentence was almost, but yes I was exaggerating for effect.

    I imagine today though that between Google, Amazon and Tesco you probably could have a pretty good strike rate for someone who regularly used your services. And pregnancy is the least of it, they also picked up on the life changes like promotions, marriage, divorce, kids etc that shifted buying habits without people necessarily consciously thinking about it. Carefully tailored marketing of specific products to a receptive audience is the grail of this sort of data, because it can actually provide you that audience.

    125:

    It's the grail, but right now the implementation is lacking.

    "I see you just bought a new washing machine. People who bought washing machines also looked at washing machines. How would you like to buy a nice new washing machine?"

    126:

    Software security is not a new issue; I published my first article warning people how to protect themselves from viruses and trojans nearly 30 years ago, and I wasn't remotely the first person writing about this. Breaches of password databases aren't much younger; large-scale breaches date back to the early 1990s, if memory serves. Nowadays, it's hard to open a newspaper, watch a TV news show, or read an e-bulletin without reading about one or more of a crop of malware running from the exotic to the mundane or about hacked corporate databases. The problem is not one of awareness.

    Shipping any modern "connected" product without a feature that requires the user to change the password as soon as the thing is turned on is gross negligence. Failing to aggressively protect any data you harvest is equally gross negligence. Sufficiently determined hackers will get you, one way or another, even if you do work hard to protect your connections and the data traveling over them, but there are clear best practices that are being widely ignored because they're expensive or time-consuming -- or simply inconvenient.

    I'd like to say that programmers and engineers are aware of the problem and working hard to stop it, but the evidence suggests otherwise. As a profession, these people need to up their game. The work they do is insanely complex, so there are no guarantees, but they're a gifted group of people. If anyone can solve or at least drastically mitigate the problem, they can.

    But they need to be given time and resources to do so. I'm not the first one to note that modern companies are the functional equivalent of human sociopaths; they have no investment in the safety or success of their staff or customers in the absence of strong legal remedies that are strongly enforced. In particular, it's long past time to dispense with the notion of a "corporate entity". Corporate leaders need to be hauled, screaming and kicking, out from behind the "corporate veil" so that they are held responsible for the actions of their companies -- and those who exercised due diligence will be just as safe as those who dwell behind the corporate veil. There are no guarantees that they'll succeed, but the fear of legal action will at least make them try. And that's the minimum we should ask of them.

    127:

    Please don't be silly. Firstly, they already target people with commonly-associated devices, like tumble driers and maintenance contracts. And, secondly, you put lots of different data together to detect when (say) someone has just bought a house, and target them with what other products new house-owners are likely to buy. The techniques are many decades old, the implementation is already here, and they are limited primarily by the data and secondarily by the data laws.

    128:

    "Breaches of password databases aren't much younger; large-scale breaches date back to the early 1990s, if memory serves."

    1960s, possibly even 1950s. They weren't as large-scale, but that's only because the databases were smaller then. Roger Needham invented password encryption because it was needed, not to counter an imaginary threat. Otherwise, I agree.

    129:

    Amazon keeps recommending books to me by some guy named "Stross". Can't think why.

    130:

    Oh yes?

    Is he any good then?

    131:

    And as it turned out, encryption didn't stop dictionary attacks -- the file holding passwords was world-readable in Unix, the algorithm used to encrypt words was known, and it was easy to run dictionaries and lists of common words, names and fiendishly obfuscated candidates (with digit substitutions for O and I) through the encryption process to build a table of millions of possibilities and look for matches.

    Security is a continuing process, not a solved problem, and this is true of IoT devices as much as banking systems. The worrying thing is that the IoT designers are evolving up a path already taken by previous developers and not catching up quite as fast as they should. In their defence, it's a rare IoT controller that has a secure crypto module built into the silicon, although they are getting there. An Arduino with a Wifi or an Ethernet "hat" is wide open electronically speaking, as are many off-the-shelf devices.
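
A toy illustration of the precomputed-dictionary attack described above, and of why salted, deliberately slow hashing blunts it; everything here is from Python's standard library, and the passwords are obviously invented:

```python
import hashlib, os

# Fast, unsalted hash: the attacker hashes a wordlist once, then every
# leaked digest is a simple dictionary lookup.
wordlist = ["password", "letmein", "hunter2", "pa55w0rd"]
lookup = {hashlib.sha256(w.encode()).hexdigest(): w for w in wordlist}

leaked = hashlib.sha256(b"hunter2").hexdigest()      # pulled from a breach
print(lookup.get(leaked))                            # -> hunter2

# Salted, slow KDF: the same password hashes differently for every user,
# and each guess now costs ~200,000 hash iterations instead of one.
salt = os.urandom(16)
digest = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 200_000)
print(digest.hex()[:16], "...")
```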

    132:

    I'm well aware of that; it's why I specified the tyre make (and by implication model) as well as the car make and engine.

    133:

    Hey, if Bannon and his horde of lardy old rotten sacks of bastard want to gear up and go play crusader... let 'em!

    Hell, we should assist, nay, encourage, even require this. Yes, slap some mail and a great helm on those creaky disgusting frames, give them a shield and sword, and leave them in the desert... not particularly concerned which desert, but might I suggest Antarctica?

    Back to my murder mystery: a prosthetic limb could be updated with improved control software, linked to storage and display devices, and so on - there are various other reasons why it might be connected.

    Though perhaps hacking one to choke someone to death is a bit out there; causing a knee or ankle joint to lock up or flex strongly and suddenly, perhaps while an eerily silent electric car cruises up... you get the idea.

    134:

    I don't buy all my groceries there; I don't buy all brand names or all own brands. I don't buy all ingredients or all ready meals. I buy next to none of my stationery, periodicals... there.

    I doubt you can get much more than which is my preferred shopping day from it.

    A bit like how "Large River" keep trying to sell me Charlene Stross books. ;-)

    135:

    1970s; I'm a couple of years older than Charlie.

    136:

    The Titan (from where Unix got many of its concepts, including this one) didn't make that mistake, as far as I can recall; remember that Unix was designed for use by a single, cooperating development team (i.e. one for each system), and security against malice (rather than accident) wasn't really part of its design remit. As soon as Unix was used more widely, the need to hide the encrypted form became apparent (hence shadow password files). That was adequate until the use of multiple computers became common (and that WAS the 1990s), with the tendency for users to use a single password for all the systems they use.

    137:

    Do you think they would stop if they knew you get to read them all for free? :)

    @127: Silly? I see something similar every time I buy something from Amazon. Maybe they just think I have poor impulse control and need 20 cameras.

    138:

    It's more complicated than that.

    1) Mediocre programmers can only be distinguished from good programmers by good programmers. (Degrees and certifications are next to useless when it comes to determining competence beyond a very low level.) Because even mediocre programmers are paid absurdly high wages, nobody is really willing to staff HR positions entirely with programmers, and putting programmers in charge of even the tail end of the interview process is pretty rare.

    2) Mediocre programmers vastly outnumber good programmers because the amount of effort and resources it takes to barely pass a CS degree program is much smaller than the amount of effort and resources it takes to become competent. Because of high wages, mediocre programmers flood all markets, and even programmers who are known to be unskilled can expect their starting salary to be much higher than that of equally technically skilled people in slightly different fields.

    3) Wages are high because demand is high, and demand is high partly because mediocre programmers mostly produce problems that need to be fixed by other programmers. And because major employers are being funded by what amounts to gambling whales (VCs and the stock market) rather than by actual profit, the mechanisms for distinguishing between good work and bad work aren't optimized. Really dysfunctional companies tend to eventually go out of business, but in the short term the PR and charisma of a handful of figureheads matters a lot more than technical feasibility and competence, and by the time someone's incompetence is noticed up the chain (which takes a while, since the average level of competence even in the pool of software engineers is low and the average level of software engineering competence in management is lower) they can expect to have already moved on to another company. It's like a pyramid scheme in reverse.

    That said, flooding the market with crappy code and bringing on crappy developers to fix the crappy code (but who end up actually just making it worse) is expensive and wasteful. I suspect that various places will increase quality standards, improve hiring practices, and prevent some of the worst low-hanging-fruit of scammy behavior with regard to poor technical candidates systematically, and will then start to out-compete other players. This sort of depends on VCs going broke or becoming more careful, because all the over-investment going on in the industry is preventing companies from failing on the merits of their products. (Something like UBI might fix this, because VC cash would no longer be the only game in town -- it would be possible to start a small business in the sector without courting rich old men and moving into the world's most expensive walk-in closet, and the norms would begin to change.)

    Once that happens, the bubble pops, good software engineers start getting paid like good civil engineers instead of like movie stars, crappy software engineers start getting paid like fast food workers, and the number of crappy software engineers making important decisions drops, resulting in better quality software all around.

    139:

    Personally, I'm glad that current web advertising-related AIs only start showing me ads after vs. before I need to look for a product/service. Not ready for an IoT/etail AI that anticipates my needs that well. (Re-roofed last fall after considerable online research on roofing materials, methods and contractors. Have been seeing ads for roofs ever since. The new roof is rated 40 years so some time to go before this ad strategy pays off.)

    140:

    Have wondered why this industry hasn't created an AI to specifically check for broken or bad code. Can't be that many different programming languages in current use within the online universe. And, because there are really two major consumer S/W options (MSFT, AAPL) even an AI that checked broken code only in these two languages would reduce a lot of pain. The AI needn't be able to fix every problem, only identify where it occurs or what-all any new bit of code would change in the existing S/W.

    141:

    And if you change supplier, you HAVE to have a "smart" meter.

    Question: how long before this total fucking scam crashes, with people's power cut off &/or astronomical bills, etc etc? And how, after the initial denials & claims that "everything is perfect", will they get out of it?

    142:

    Have wondered why this industry hasn't created an AI to specifically check for broken or bad code.

    Because of the halting problem. Which is why programmers will be last out the door and turn off the lights when work is taken over by the machines.

    143:

    OK, I don't have a "home interweb thing", and I recently (months ago) got replacement meters which aren't "smart". So it's clearly still possible to get them.

    144:

    2) Mediocre programmers vastly outnumber good programmers because the amount of effort and resources it takes to barely pass a CS degree program is much smaller than the amount of effort and resources it takes to become competent.

    Recent surveys show that less than half of those working as developers have any CS-related degree at all. CS is also the subject with the highest drop-out rate, because it is

    (a) hard, and

    (b) something not many people have a talent for.

    This doesn't indicate that either salaries or shortages are going to decrease.

    145:

    The halting problem, but also the difficulty of knowing that the code is broken or bad. Which requires the checker to have a specification of what the code is meant to do, and the coder to have written one — correctly.

    146:

    And also not everyone working in CS actually has a degree, not even the ones who've been told that they're "good" by people who quite definitely are!

    147:

    Do you have an evil twin?

    148:

    The worrying thing is that the IoT designers are evolving up a path already taken by previous developers and not catching up quite as fast as they should.

    The sad thing is that so many of these people are running embedded Linux, which has extremely powerful security available, and they're not using it! WTF?

    149:

    What makes you think you'd need an AI to do that?

    Finding bad or broken code can in fact mostly be done mechanically -- and there are a lot of tools to do that. Including the compilers that both Microsoft and Apple use. Of course, the bug checkers also have bugs. Where it falls down is determining intent -- that an if condition is using the wrong variable, for example.

    Also: Don't use C for applications.
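
As a concrete (and entirely hypothetical) example of the kind of thing mechanical checkers already catch, versus the intent problem they can't:

```python
# Ordinary linters (pyflakes, pylint) will typically flag both the mutable
# default argument and the variable that is assigned but never used --
# no AI required.
def add_item(item, bucket=[]):
    count = 0
    bucket.append(item)
    return bucket

# What no tool can tell you is the *intent*: did the author mean a fresh
# list on every call, and was `count` supposed to be incremented and returned?
```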

    150:

    Things that can detect some types of bad, or at least dubious, code do exist, though they aren't perfect (and can't be). However, this requires willingness to spend the time and money setting one up, learning to use it, and actually fixing the bugs uncovered.

    There are commercial and open-source tools for this, working either by reviewing source code for potential defects, or by monitoring a running program for defective behavior. Neither is simple to write, both must be significantly modified or completely rewritten to work with a new language or platform (there's way more than two of each in common use), and both require the user to understand what they're saying.

    Also of relevance re security are fuzzers, which actively try to make a program crash by starting with some input and mangling it until a crash happens.
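
A toy mutation fuzzer along the lines just described, aimed at a deliberately fragile (and made-up) parser; real fuzzers such as AFL or libFuzzer add coverage feedback, but the core loop looks like this:

```python
import random

def parse(data: bytes) -> str:
    # Hypothetical target: first byte is a length, the rest is an ASCII name.
    length = data[0]
    return data[1:1 + length].decode("ascii")   # chokes on mutated high bytes

seed = bytes([5]) + b"hello"                     # a known-good input
random.seed(1)
for i in range(10_000):
    mutated = bytearray(seed)
    mutated[random.randrange(len(mutated))] = random.randrange(256)
    try:
        parse(bytes(mutated))
    except Exception as exc:                     # any crash is a finding
        print(f"iteration {i}: {bytes(mutated)!r} crashed: {exc!r}")
        break
```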

    151:

    There is also automated testing for security vulnerabilities, though I don't know how sophisticated it is. But even as an amateur coder I know enough to sanitize and validate my inputs.
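
The kind of validation being talked about, as a trivial hypothetical helper (compare the test-engineer joke a couple of comments further down: 0 beers, -1 beers, 65536 beers, a lizard...):

```python
def parse_quantity(raw: str, maximum: int = 100) -> int:
    """Accept a small positive integer from untrusted input, or refuse."""
    try:
        value = int(raw.strip())
    except (ValueError, AttributeError):
        raise ValueError(f"not a number: {raw!r}")
    if not 1 <= value <= maximum:
        raise ValueError(f"out of range: {value}")
    return value

# parse_quantity("3")      -> 3
# parse_quantity("-1")     -> ValueError
# parse_quantity("lizard") -> ValueError
```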

    152:

    I think the main enemy of good code may not be bad coders so much as boredom. Input validation is boring and it sucks!

    153:

    Don't use C for applications

    Oi, I resemble that remark! (He says, with over twenty-five years of his career spent coding C/C++, and a firm belief that Kernighan, Ritchie, and Stroustrup are a fairly Holy Trinity).

    Which reminds me - a software test engineer walks into a bar, orders a beer. Orders zero beers. Orders -1 beers. Orders 65536 beers. Orders a lizard. Orders a sdlkfjghdk.

    154:

    Not having IoT items in your home is a good start, but unfortunately even I sometimes leave my house, in which case one might discover that, to make up an example at random, a hotel has replaced its light switches with Android tablets, with hilarious consequences.

    155:

    This sounds familiar. Ah yes, Stewart Brand (1999), Clock of the Long Now. "We are in the process of building one vast global computer ("The network is the computer" proclaims Sun Microsystems.) This world computer could easily become the Legacy System from Hell that holds civilization hostage. The system doesn't really work, it can't be fixed, no one understands it, no one is in charge of it, it can't be lived without, and it gets worse every year."

    Without diving into which unexpected screwups we'll actually see, I should point out that this resonates with climate science. Arrhenius gave us the basic climate warming equation going on 120 years ago, Pres. Johnson talked about the problem to Congress in 1968, and fools that we are, we're still trying to convince ourselves that this is a new problem.

    Here's one likely prediction: internet-free devices will be produced, and they will cost as much as IoT devices do. This one's easy, because shoemakers did the same thing. Back during the barefoot running craze, they produced shoes that were basically modernized versions of decades-old shoe designs with thin soles, and sold them at the same cost as thick-soled shoes. Now that the craze has passed, they no longer make them (I like thin-soled shoes, so I notice these things). So anyway, we'll have fridges without internet connections, selling for the same price as connected ones (if not more expensive), just for those weirdos who don't want to get a text from their milk carton telling them that it's spoiled.

    Here's an unlikely prediction: instead of a Butlerian Jihad, a Butlerian Crusade, from inside a bible belt somewhere. I can see a jihad to overthrow the internet arising from Pakistan or Iran, but given the religious politics of the moment, I can't see a mass uprising in the west under the term of jihad. It's been too corrupted by a few skatacephs. The way this could happen is that our vast intelligence-industrial complex gets mired down in the Legacy System from Hell, but having long lost their HUMINT skill set and personnel, they don't adapt. People realize that going radio silent is the way to organize, resurrect old-school statecraft and protest strategies, and take down the system. Probably what ultimately replaces it will be some sort of authoritarian dystopia (per the Arab Spring), but it's entirely possible that our digital addiction will ultimately be used against us. Most of the knowledge for how to do this is out there and on paper.

    156:

    Where do you think consumer electronics like IoT devices are actually made and where is the firmware coded? Because I thought it was Shenzhen and Bangalore. And being done by the lowest bidder in cut-throat markets. All this talk about good and bad programmers and CS degrees seems a bit beside the point.

    157:

    You might be interested to know that Ritchie's and Stroustrup's views on C (and C++) are not what they are generally believed to be :-)

    Thank you for the joke - that's new to me, and very nice! May I copy and use it?

    158:

    The correlation between having a CS degree and being a competent programmer is not high. Quite a few companies will not employ CS graduates as programmers.

    159:

    K&R is the Bible.

    And when the programmer orders 0 bheers, the knowledgeable bartender hands him one....

    Why so many bad programmers? Thinking about it, and reading comments above, it really is two-fold. For one, bad programmers are cheaper, and if they're brought in from out of country, or work on the other side of the world, they're a lot cheaper. Secondly... there was an article last year in the Atlantic (or Slate?) entitled "Your HR Department Hates You". In it, they note there used to be Personnel departments, who knew the organization, and worked with the hiring managers. Now it's HR, staffed by folks who don't know the organization, don't care to know it, and are planning their next job move. The worst I know of was in '09, when I was last looking for a job, and Grumman's website a) had a video at the top of someone talking about Hot Jobs! (Wtf? I'm applying for what I do, not trying to do something I don't have a clue about...), and b) IIRC, you couldn't even add a cover letter; you just let it slurp up your resume (.doc format, please), and it was sorta-kinda associated with a specific ad, and they'd do database searches to find "qualified" candidates.

    Finally, about old code.... 18 years ago, this one went around, about the COBOL programmer who got so sick of all the Y2K mess that he had himself put into cold sleep, to sleep past it. Records got screwed up, and he slept on. They finally wake him up, and it's the year 9998. He's shocked - everything is long dead and gone, but he's in The Future. He finally asks, "Why did you wake me up now?".

    The response? "Well, the records said you're a COBOL programmer, and we're about to roll over to the year 10,000...."

    mark

    160:

    would be the ability of the spooks to enter the house of a suspect, and install such an extra - or replace one commodity item by an apparently identical one with the extra feature.

    Breaking and entering (aka "surreptitious entry") for such purposes and others is part of the stock in trade of the major intelligence services and has been for decades.

    http://www.other-news.info/2013/07/when-the-nsa-cant-break-into-your-computer-these-guys-break-into-your-house/

    161:

    Re: Re: 'Two tier society, where the pluto/kleptocrats can afford all the latest shinies ...'

    Unable to picture any two-tiered society that's made up of pluto-/kleptocrats and hordes of unwashed, uneducated, trampled-upon minions that lasts more than one generation.

    Yeah, well, I'm in the US, and let me get back to you about that in 10 years....

    mark

    162:

    There is an apocryphal story that on receiving some intelligence about an upcoming early-morning car-bombing, 14 Int broke into a PIRA player's house, and broke his alarm clock. Then "discovered" the waiting car. Then spread a story that PIRA was so incompetent, it couldn't even get to its own bombings on time.

    PIRA then kneecapped their own bloke...

    163:

    The Y2K problem always was a political, rather than technical, one. I was seriously worried that the UK government would panic, and close down the telephone systems, which would have disrupted water and food supplies and so on. Which would then have built up to a national panic.

    The Year 2038 problem is MUCH more serious, and a great deal of its severity is due to C, Unix and Microsoft (and, to some extent, Berkeley). The UK (including me) tried to alleviate that in the context of the C standard, but failed.
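
For anyone who hasn't met it, the 2038 rollover in a nutshell; this is just an illustration of the 32-bit time_t limit, not of the C-standard type issues mentioned above:

```python
import datetime, struct

epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
print(epoch + datetime.timedelta(seconds=2**31 - 1))   # 2038-01-19 03:14:07+00:00

struct.pack(">i", 2**31 - 1)          # still fits in a signed 32-bit slot
try:
    struct.pack(">i", 2**31)          # one second later: overflow
except struct.error as exc:
    print("overflow:", exc)
```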

    164:

    Stroustrup's views on C (and C++) are not what they are generally believed to be :-)

    He was doing a lecture tour a year or two ago, talking about C++11; the Edinburgh session had to be rearranged twice to fit it into progressively larger venues. Eventually, it was held in a packed George Square Theatre, with an audience of several hundred; and was rather enjoyable.

    Not surprisingly, he's still a fan of C++ as a versatile language supporting a range of different programming styles...

    :) As for the joke, feel free to reuse - I didn't create it :)

    165:

    The Y2038 problem is significantly less of an issue now -- pretty much all the systems people care about are already 64-bit, and more are on the way. In ten years, it'll matter even less.

    Of course, the real problem there is that there will be systems which have been happily running, without complaint or issue, for 20 years, and so nobody is aware of them to upgrade them...

    166:

    Re: 'Things that can detect some types of bad, or at least dubious, code do exist, though they aren't perfect (and can't be).'

    I was thinking along the lines of building and improving some 'AI' that would spot problems faster than a human could, with anything novel that it wasn't programmed to fix being sent to humans to review ... an 'intelligent' scout/assistant, as it were. Assume this would speed up the review, bug hunting/catching and fixing process. And if this AI is built to learn, then it could potentially speed up the detection and fixing process at a Moore's Law pace.

    As long as there are humans writing/using code we'll probably have 'imperfect code', simply because humans have randomly distributed needs and skills, plus they can be impulsive, plus they go through an extended birth-maturation-death cycle, all of which makes them much more 'unpredictable' than electronics.

    167:

    As the computational power of these connected toys and devices increases, it will become more likely that someone will push their own firmware or software updates out to unsecured devices. Possibly adding new capabilities to them.

    In general, if these devices are keeping any sort of log they could be very useful in building a profile of users and people living in a house. For example a smart-lock and automated lights/climate control could build a pattern of when you will be in your house or not. (Which will probably be useful to police and investigators as well).

    Besides spying on people, there could be other ways to misuse compromised devices. It could be simple trolling -- have the toys of children spout hate speech. Or more malicious, such as seizing control of the smart features of a home and ransoming it (as was done to an Austrian hotel with their room locks).

    Or even bigger, causing malfunctions in the homes and vehicles and devices of voters on election day to sway the vote in a particular direction. A lot of people when faced with a bunch of headaches -- doors suddenly won't lock, kids toys stop working and the kid is having a meltdown, warning lights come on in your car -- might decide they don't have time to vote after all.

    168:

    That's not the problem, unfortunately, but the way that C and Unix types do not scale with the size of the address space, and the conversion rules. Plus, of course, Microsoft's demented 32-bit long on 64-bit systems.

    169:

    Re: 'Why so many bad programmers?'

    From a consumer/parental perspective ...

    There's also the idea that's been actively pushed down kids' throats since the first personal computers came on the market: any kid can learn to program. And so they do, and they continue self-teaching until someone tells them to stop/takes away their PC. And while this fosters diversity of ideas and approaches in a new/developing field, it also results in uneven knowledge of programming and safe/best practices.

    From a management/HR perspective ...

    Quite a few fields administer various job-related skills tests these days as part of the job interview process. Seems like it would make sense to do the same for programming/coding esp. entry to mid-level positions.

    170:

    Thank you for the joke - that's new to me, and very nice! May I copy and use it? I laughed too, and was curious, and searched on "software test engineer walks into a bar". Didn't find that variant (though there is one with "QA tester") but did find one on reddit that ends with "He walks away without paying." :-) I like Martin's version. (Maybe with the above added to the end.)

    171:

    Of course, the real problem there is that there will be systems which have been happily running, without complaint or issue, for 20 years, and so nobody is aware of them to upgrade them...

    Problem can be even worse than that. Some years back (maybe 15 years), I looked up how much a few old 20MB hard drives were worth (am packrat), and was amused to find that people were selling them for $500. They were for an industrial version of the 286-based PC-AT, which was still being used in such applications, and spare parts were valuable.

    172:

    And if this AI is built to learn, then it could potentially speed up the detection and fixing process at a Moore's Law pace.

    You realize that you could just feed such a system random code fragments (maybe from a generative adversarial network) and let it "fix" them? That this is the stuff of fast-takeoff-general-AI nightmares?

    Anyway, there is starting to be interesting work in this area. DARPA completed a challenge in the security bug space last year: Carnegie Mellon’s Mayhem AI takes home $2 million from DARPA’s Cyber Grand Challenge or if you prefer DARPA direct: “Mayhem” Declared Preliminary Winner of Historic Cyber Grand Challenge

    173:

    Correction: CloudPets did hash the passwords, using bcrypt. They just didn't require complex passwords, so a lot of the hashes are trivial to crack.

    Troy Hunt has the technical details, if you're interested: https://www.troyhunt.com/data-from-connected-cloudpets-teddy-bears-leaked-and-ransomed-exposing-kids-voice-messages/
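
    To illustrate why hashing alone doesn't save you when the passwords themselves are weak, here's a minimal dictionary-attack sketch in C. hash_matches() is a made-up stub standing in for whatever bcrypt verify your library provides, and the stored hash is a placeholder, so the sketch stays self-contained:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Stand-in for a real bcrypt verify; stubbed here so the sketch compiles
       on its own. Pretend the victim chose "qwerty". */
    static bool hash_matches(const char *guess, const char *stored_hash)
    {
        (void)stored_hash;                    /* a real check would use the hash */
        return strcmp(guess, "qwerty") == 0;
    }

    int main(void)
    {
        const char *stored_hash = "$2a$10$...";   /* placeholder for a leaked bcrypt hash */
        const char *wordlist[] = { "123456", "password", "qwerty", "cloudpets" };

        /* bcrypt is deliberately slow, but a short list of the usual suspects
           is still only a handful of attempts per account. */
        for (size_t i = 0; i < sizeof wordlist / sizeof wordlist[0]; i++) {
            if (hash_matches(wordlist[i], stored_hash)) {
                printf("cracked: %s\n", wordlist[i]);
                return 0;
            }
        }
        printf("not in wordlist\n");
        return 0;
    }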

    174:

    Along the lines of what you're talking about, abandonware = poverty:

    There's a lot of talk about unsustainable investments in infrastructure from Strong Towns and James Kunstler.

    In a nutshell:

    1) Real earning power of the middle class is dropping, widening the GINI coefficient (readers here should be familiar).
    2) Specifically in the US, the local tax base doesn't pay for all local-use infrastructure. The Feds can pay for your shiny project.
    3) Replacement cost of the shiny project is not budgeted for.
    4) Even the maintenance cost of the shiny project can be a horrible burden locally.
    5) Oftentimes the need is for an affordable project, but the Feds only kick in after the project passes a threshold, so the affordable project becomes a shiny, overbuilt one.
    6) Taxes can't even pay for services. Example: a cul-de-sac road where snow removal is $X/yr but total tax revenues from all its households don't even pay half of that, let alone contribute to police and fire protection.

    We also see garbage hardware designs where parts aren't even available five years after the model is discontinued.

    So the abandonware problem plays exactly into that scenario. Lovely.

    175:

    It generally takes a long time to design those tests; in the US, at least, you have to be able to show they aren't biased against a protected group in any lawsuits that might come up. As a result, good tests are expensive. IT is an industry of rapid change: did you budget for a new test every year, and if not, what good is the test that you bought 5 years ago?

    176:

    I'd like to say that programmers and engineers are aware of the problem and working hard to stop it, but the evidence suggests otherwise. As a profession, these people need to up their game.

    I write burglar alarm software for a living (right now, anyway). The naive and foolish probably think software security is an important part of that, but I know first hand that the people who actually install the alarms do not like secure software. Since they're our customers, our opinion of the interests of their customers is filtered through "what makes it easier for the installer".

    The most annoying thing we have ever done is deliberately tie the app's activation to the phone rather than the account. Viz, when a user gets a new phone the app installs but "doesn't work" until the user goes through the activation process (which proves they have physical access to the alarm as well as to the root account therein). We got abused by installers, by end users, by our foreign sales reps, basically everyone and anyone who ever installed the app.

    So now, if you're signed in to Google or Apple, you can add a new device to your account and after a short pause for the install but with no further action on your part, you can run the app and turn "your" burglar alarm on and off. It's very convenient.

    You can require the alarm code/PIN for that action, but we have the stats and less than 1% of users do that.

    WRT programmers, John Quiggin has accidentially produced a similar discussion on Crooked Timber that you might find interesting: http://crookedtimber.org/2017/03/01/in-praise-of-credentialism

    177:

    We also see garbage hardware designs where parts aren't even available five years after the model is discontinued.

    We have a small warehouse full of discontinued hardware for this reason. When chip makers announce they're discontinuing a part and call for last orders, there is always a vigorous discussion about how many to order. It's harder than it sounds, because very few people tell us when they stop using our gear. So we have to guess at how many we'll need. Sure, each one is cheap, but when you have to rent a warehouse to store hundreds of thousands of them the costs mount up.

    One joy for us is that with internet connection becoming the norm we actually have a fair idea of the drop-out rate. This helps a lot, at least for newer gear.

    We also learn about older hardware in entertaining ways. I love the people who say "I have a panel that's about 20 years old, so I put the latest firmware for the latest panel on it... and now it doesn't work". Sadly, our really old panels are too dumb to support firmware verification or even device identification while flashing firmware. Some processors are less than 8 bit! So you can attach the programmer and flash whatever you like to them. They don't even error out when writing 256kB of program to 16kB of storage, they just ignore the high bits in the address.
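
    For what it's worth, "ignoring the high bits" tends to look something like the following minimal C sketch (the 16 KB part and flash_write() are illustrative, not any real panel's flash driver):

    #include <stdint.h>
    #include <stdio.h>

    #define FLASH_SIZE 0x4000u                    /* 16 KB of program storage */

    static uint8_t flash[FLASH_SIZE];

    /* Only the low 14 address bits are decoded, so writes wrap silently. */
    static void flash_write(uint32_t addr, uint8_t value)
    {
        flash[addr & (FLASH_SIZE - 1u)] = value;
    }

    int main(void)
    {
        /* "Flashing" a 256 KB image into a 16 KB part: each successive 16 KB
           chunk lands on top of the previous one and nothing ever errors out. */
        for (uint32_t addr = 0; addr < 0x40000u; addr++)
            flash_write(addr, (uint8_t)addr);

        printf("byte at offset 0 is now 0x%02X\n", flash[0]);
        return 0;
    }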

    But, and this is important: burglar alarms do not provide security. Instead they try to tell you when someone breaks into your house. The difference is subtle, and beyond some people. Both false positives and false negatives are more common than real alarms.

    I spend a lot of time building software that essentially makes one decision "I haven't heard from that panel for a while, I wonder if someone has chopped the wires/burned down the house?" It is not the easiest decision to make accurately.

    178:

    On the question of mediocre programmers, please take globalization into account.

    Unless you believe Trump/May/Le Pen/etc. are going to actually introduce autarky, expect an increase of programmers within the next decade. This increase will encompass both good and mediocre programmers. This may drive down wages of mediocre programmers while at the same time increasing their absolute number.

    179:

    expect an increase of programmers within the next decade

    That's going to come from population growth rather than further reaches in globalisation I think. Sure, there will be new programmers writing new code in South Sudan as they get more computers, but to a large extent they will be replacing Indian or Filipino programmers as software in those countries matures and consolidates.

    Outsourcing is already chasing the ever-poorer, ever more remote corners of the "code monkey" market (possibly even using actual monkeys, based on what I see). We no longer outsource to India because it's too expensive, but Malaysia and Vietnam have stepped into the gap. It's only a matter of time before we start getting code from DRC and Mali. Or North Korea {eyeroll}.

    180:

    Plus, of course, Microsoft's demented 32-bit long on 64-bit systems.

    Let me guess - it's a long long story? ;)

    181:

    I see the internet of things (along with uplifting via genetic engineering) ushering in a future where we will all live in a horror/fantasy/Disney movie.

    A world with sentient magic mirrors that evaluate your physical health parameters and let you know via statistical algorithm if you really are the fairest in the land.

    We'll pal around with talking dogs and other sentient animals with cute and endearing anthropomorphic traits.

    Furniture and other inanimate objects becoming very animate and alive like the servants in the Beast's castle.

    Palantirs that allow surveillance of any spot on the globe.

    Sentient vehicles that can make decisions like Pixar's "Cars".

    Trees that talk and walk slowly, or throw apples like those in Oz.

    Hackers and geneticists will be the new wizards, both good and evil.

    Golems and Frankenstein monsters of deceased loved ones will come back to life.

    Ghost-like holograms will allow us to communicate with the dead via mediums like Madame Blavatsky.

    Weeping angels, flying broomsticks, dragons, krakens and werewolves will all be real, manufactured in factories or grown in labs.

    Magic cauldrons will create anything from carbon fiber.

    Brain to brain jacks will make telepathy real and quantum computing will allow prophecy.

    Possession will be real with devilish malware possessing your refrigerator, and making it vomit green pea soup out of the ice dispenser, forcing you to call a programmer/exorcist ("The power of Microsoft compels you!").

    Someone needs to write an SF book series set in a high-tech future whose technology has effectively turned the world into a medieval fantasy.

    And all I really want is a talking dog.

    That would be cool.

    182:

    Searching the comments, these are words and phrases that, as I type this, do not show up:

    Complexity, Emergent, Systems Theory, Complex Systems, Feedback Loop

    I modestly submit that this is an omission. Emergent behavior is a thing.

    In other words, the kind of broken-but-you-can't-afford-to-remove-it-or-shut-it-down infrastructure layer described in OGH's question, and well elaborated in comments so far, is going to come alive. Not "alive" as in "malignant sentience" -- I don't exclude aware life with some degree of volition, just don't see it as likely (or honestly, as dramatically compelling) -- but "alive" as in "stagnant pond in the tropics".

    In this environment, there will come into existence self-sustaining information entities, subject to evolutionary pressures, that will eventually arrange themselves into structures much like biological ecologies, moderating energy and information throughput. Most of them will exist at the level of microorganisms, I'd guess. Maybe some forms as complex as ants or algae, bees or parasitic worms. They'd be very good at attack, defense, predation, evasion, gathering informational and thermodynamic nutrition, and so on.

    What would this stagnant pond analogue become?

    The possibilities seem immense, in SFnal terms.

    So, that's the first point.

    The second point is that, in any kind of dystopian abandoned infrastructure or neighborhood or former country, as long as power is getting to this information ecology, it will continue to flourish. It would eventually become a very strange environment indeed, although perhaps not visible to the naked eye.

    This realm, this subnet where the wild things are, might be largely imperceptible to people who have abandoned information technology.

    It might, however, be devastatingly hostile to people or drones or autonomous robots or -- of course -- software entities. Those who attempt to bring their pads and wifi implants and field networks in, to do some kind of exploration, say, or to do some work, might find astonishing problem sets.

    Full Credit to WreWrite and John Ohno for mentioning Gibson's voodoo, and Vinge's novel with its horribly capable emergent information entities, respectively.

    183:

    There are commercial and open-source tools for this

    Which can only test for common programmer errors, not "intent of the user" problems. My master's degree was in the area and it's fun.

    An awful lot of software errors happen between the user and the programmer rather than in the code. Most commonly some dude makes a cool new thing that does a particular task really well. Everyone goes "ooh". Later other people discover that the program also does, or kind of does, other useful things. That is a problem: "kind of does what it should" is otherwise known as a bug.

    working either by reviewing source code for potential defects, or monitoring a running program for defective behavior. Neither is simple to write

    Or to use. The false positive rate is awful, to the point where much of the manual often consists of instructions for disabling bits of the tool over bits of your code. I can manage to get gcc/g++ to mostly not complain about my code (except standards compliance, where IDGAF whether the language feature I'm using is in the standard or not).

    With cppcheck I've been forced to decorate my code quite extensively to deal with useful warnings that false-positive a lot. Every single one needs to be checked, but sadly the "file with list of things to ignore" uses line numbers rather than code snippets. So every time you add or delete a line above the suppression it becomes invalid. Multiply by 2000-odd suppressions and the effort to maintain the checking tool becomes significant.
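
    For the record, that decoration looks roughly like this; cppcheck does also accept inline suppressions (enabled with --inline-suppr), which at least move with the code, though every one still has to be written and audited. The check ID is a real one, but the fake sensor read below is purely illustrative:

    #include <stdio.h>

    static int read_sensor(void) { return 42; }   /* imagine a hardware register */

    static void poll_hardware(void)
    {
        /* The read has a side effect on the (imaginary) hardware, so the value
           is deliberately unused. The suppression travels with the code instead
           of living in a separate file keyed by line number. */
        // cppcheck-suppress unreadVariable
        int scratch = read_sensor();
    }

    int main(void)
    {
        poll_hardware();
        return 0;
    }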

    Not to mention conflicts like the one between OpenSSL and Valgrind, both sides convinced that the other is wrong and in the meantime every single user of both systems has to put up with valgrind spewing copious warnings about OpenSSL's improper memory use.

    184:

    OK, so here's your exam question from Database Design 101:

    I've got this schema already as part of a directory system. Omitting various fields irrelevant to the current exercise:

    Table "Persons", fields PersonUID (unique), NaturalPerson (boolean), DateofBirth, DateofDeath, Meta Table "PersonNames", fields IdPersonNames (unique), PersionUID (references Persons.PersionUID), Name, CommenceDate, EndDate, Mode Table "LinkTypes", fields LinkUID (unique), Vertical (boolean), Relationship (text), Contra (text) Table "PersonLinks", fields IDPersonLinks (unique), PersionUID (references Persons.PersionUID), LinkedPersionUID (references Persons.PersionUID), LinkUID (references LinkTypes.LinkUID), CommenceDate, EndDate

    Assuming one has a content owner ("content tenant" is probably the more appropriate term) table of Person_UID values, and another of permitted relationships using Link_UID values, it is then a (comparatively) simple query on Person_Links. The only significant complication is to cover all the date periods to ensure that the relationship is current (covering divorces and bereavements).

    A content supplier would probably want relationships to differentiate between children residing with the content tenant and those who are grown up and/or non-resident, to allow content supply to one group while only sending marketing to the other, so would use two different Link_Types, only one of which is listed in the permitted access table.

    However, this schema can cover any relationships required, so not only polyamorous marriages but also employment, membership of clubs, people who've met in a pub...

    185:

    Also, we already live in a world where software is written on layers of archaeological garbage that no-one understands. TCP, for example, which powers your web browser. Aside from the known faults, there are funky interactions between systems of different levels of brokenness that are hard to deal with (often you just fake a fix by adding a different intermediate device).

    I see a lot of stuff now that tries to get back to basics, like the Arduino platform, but sadly I found that it buys in a lot of (literal) black boxes to do various things, and there's no way to control or audit a lot of those. USB interfaces, for example... Arduinos just use a stock USB chip so the stock USB firmware-level malware will infect that just like everything else. Likewise, buying a display for those without getting a free bonus general-purpose CPU built into the display is basically impossible - you end up making one out of clumps of LEDs or 7-segment LCD displays.

    Hopefully over time that project will develop and become more capable while remaining open source. Ideally someone will come up with a way for the Arduino to both self-audit and audit connected devices. Quite how that would even work at a theoretical level I cannot imagine.

    186:

    Astronomical water bills are already a problem in some municipalities in the US; Detroit has a couple of suburbs that contracted out their water departments to hedge funds and the like.

    NOT a good idea, unless you are an executive in private equity. Quadrupling of rates has happened.

    187:
    Also, we already live in a world where software is written on layers of archaeological garbage that no-one understands. TCP, for example

    Er. No.

    Isn't everyone friends with at least four people who have independently implemented IP stacks?

    188:

    Isn't everyone friends with at least four people who have independently implemented IP stacks?

    Not really, no; only the people whose friends have taken enough CS courses. Most people don't do IP stacks as a hobby. Also, is everyone friends with at least four people who have independently implemented any TCP/IP stack in common use?

    It's quite different to write your own versus figuring out how the one in, for example, Windows 10 works well enough to grok it and fix it. Or even the one in Linux.

    189:

    I ... can't stop laughing at your assumptions.

    190:

    Now I'm lost. Please share my assumed assumptions with me?

    What I'm trying to say is that implementing something complex and broken at quite a fundamental specification level (like TCP/IP) from scratch, as an exercise, is quite different from working with legacy code and assumptions that are, at worst, decades old.

    191:

    Er Yes. See Elderly Cynic's comments @33

    192:

    So a sort of 戸籍 (Koseki) on steroids? Interesting...

    193:

    "Roadside Picnic" Except home-grown, you mean?

    194:

    Try reading John Ringo's Council Wars series. Although this is from the perspective of when it stops working...

    195:

    Something similar is described in World Enough, and Time.

    It was apparently the first volume in the New World series, but I only read that one, so no idea what happened afterwards.

    197:

    AI for code validation has been tried before - the results weren't pretty.

    198:

    No particular surprises there. The boundaries of death merely reflect the current state of the art in detecting and defining death just as previously checking solely for a pulse/heartbeat did.

    The Schlock Mercenary strip even has a classification system called the Lazarus Scale; it's not inconceivable that something similar will be needed IRL soon.

    199:

    A doctor friend of mine keeps telling me that death is actually one of the harder things to diagnose. I find that a bit concerning, really.

    200:

    Isn't everyone friends with at least four people who have independently implemented IP stacks?

    Yes, that's exactly what I was talking about.

    Take something both complex and subtle that a lot of really smart people have been working on for a long time trying to get it to work properly across a wide range of situations, and bash out a rough approximation in a couple of weeks. Or years, whatever.

    Then the aforementioned smart people have to add another set of "when the other end does this..." rules to cover yet another type of kinda-sorta-works-if-you're-lucky bullshit floating round the internet. Imagine you have a big fat pipe listening to things on the internet, and for some reason actually want to communicate with them. Say you're hosting a website, for example. Now you're not just dealing with a few sane people carefully crafting known-to-be-tricky software, you've got any muppet with an internet connection making requests, probing you, their malware is trying to corrupt you, while their cheap'n'cheerful all-in-one ADSL modem/router/wifi AP does its (not very good) best to keep the connections alive despite not having had a firmware update since 2001.

    Look, I know I am way too dumb to write an IP stack, I barely understand what it does. I mean "packets go over the network", yup, and something about layers. But the interaction between packet size and routing decisions between Sydney and London? Snort, I have no freaking idea. Why do some Telstra Cable modems allow the first couple of days of 2 byte UDP packets through, then block that device from using that UDP port until you hard reset them? I wish I knew. Why does a private VPN using GPRS sometimes have traceroutes that leave the country? And who the hell decided that iPhones should only accept UDP packets less than 9612 bytes?

    So please, tell me more about all these people who have successfully implemented robust and correct IP stacks.

    201:

    death is actually one of the harder things to diagnose

    Nah, it's really simple. You just get someone else to define death unambiguously, then test against that definition.

    I say that nothing ever dies, it just gets eaten and becomes a different sort of life.

    202:

    Finding trustworthy information on the subject is difficult, because the search results are so grossly polluted with propaganda and bullshit - split roughly 3:1 between "smart meters cure erectile dysfunction" from the "pro" brigade, and "smart meters cause erectile dysfunction" from the "anti" brigade. The antis, unfortunately, seem to be largely composed of dribbling loons - you know the type: they insist that "electromagnetic radiation makes them ill", but only ever feel ill when they can actually see an object which they have been told emits it (whether switched on or not), and are fine when they don't think there's any about (even if they're in Sutton Coldfield) - so I don't feel any confidence in the validity of any other information they provide either...

    But the occasional snippets from sources which are reasonably reputable and don't have an interest in plugging either side of the argument all seem to agree that there is a right to refuse a smart meter in the UK and you can't be penalised for refusing to have one.

    (Personal and probably not-all-that-significant snippets: I did have a smart meter salesman knock on my door; I told him "over my dead body", and from his reaction I got the feeling plenty of other people had said much the same...

    Comments from various people about what happens if you don't let them read your meter: They never even try to read my meter. They never send me a bill either. I pay by charging up a token at the local shop and then plugging it into the meter. Lots of people do this; vastly more convenient than all the hassle of accounts/readings/bills and also much more robust.)

    203:

    I'm a programmer. Here are a few words on the subject of good programmers, mediocre programmers, and their place in the world...

    If you go by the sort of industries I've worked in, the sort of companies and brand names I've been associated with, the sort of products which my handiwork has ended up in, and the amount of money I've sometimes been paid, you might get the idea that I was a good programmer.

    But I know better. I've been privileged enough to work with some properly good, so clever it actually scares me programmers, I've seen the difference, and deep down inside I know that I fall into the mediocre programmer category. There's no shame in that: writing a really good computer program, or understanding a non-trivial problem domain well enough to write a really good program to operate within it, is hard, and being just about good enough to do those jobs well enough to produce something fit for purpose isn't a trivial achievement.

    Here's the thing though, I know I'm a mediocre programmer and I work accordingly. I make sure I make the best use of the resources I have (principally my own knowledge, experience and ability, the knowledge, experience, and ability of my peers within the organisation, and where possible and appropriate the knowledge, experience, and ability of my peers in the community at large) to do the best, safest, most secure job I can within the limitations of the resources available and in spite of my own mediocrity.

    I believe my acknowledgement of my own mediocrity makes me a better, safer programmer than a lot of people who don't know that they're mediocre. Really, properly, genuinely good programmers are always going to be in short supply and, given that, the next best thing is programmers who know that they're mediocre (or at least are prepared to accept the possibility) and figure their own mediocrity and that of (most of) their peers into their designs, coding styles, testing regimes, QA procedures, and general working practices...

    204:

    You still occasionally hear of people in hospitals waking up and finding themselves in the mortuary. What worries me is the ones that you don't get to hear about because they've already screwed the lid down...

    But I do find the linked article a bit strange, because I thought that the only sign of death that was regarded as fully conclusive was cessation of brain activity. (The mortuary wakers happen because they usually just go by more straightforward signs like cessation of heartbeat, and don't bother to test for brain death unless they're intending to go straight in and scrounge spare parts.)

    205:

    I pay by charging up a token at the local shop and then plugging it into the meter. Lots of people do this; vastly more convenient than all the hassle of accounts/readings/bills and also much more robust.)

    Are you in the UK? AFAIK it's a spectacularly expensive way to pay for your 'leccy.

    206:

    I believe my acknowledgement of my own mediocrity makes me a better, safer programmer than a lot of people who don't know that they're mediocre.

    QFT. I was mulling posting a similar sentiment. I believe the quality of coders follows - and will always follow - a fairly standard normal distribution; therefore the vast majority of coders will always be mediocre (average) or worse. We should note, however, that we are grossly simplifying the software development process by calling it "coding" - I'm a lousy coder but a pretty decent software professional (imo!).

    207:

    I believe my acknowledgement of my own mediocrity makes me a better, safer programmer than a lot of people who don't know that they're mediocre. Really, properly, genuinely good programmers are always going to be in short supply and, given that, the next best thing is programmers who know that they're mediocre (or at least are prepared to accept the possibility) and figure their own mediocrity and that of (most of) their peers into their designs, coding styles, testing regimes, QA procedures, and general working practices...

    I'd like to work with you more than a better programmer who wouldn't think of those things. Most software nowadays is a group effort, and getting a good team (which can also teach new people when the group composition fails) is in my opinion more important than having brilliant single performers.

    Many people can hack code. Most of them can hack code which is so clever nobody understands it though it works. Software engineering needs more than hacking code, though.

    208:

    Yes, precisely, though it's not actually the coding, so much as the design. The signature of the failure I found was very distinctive, and I hit it on three different Unices that had no codebase in common (and on three distinct hardware platforms and transport mechanisms). It simply HAD to be a design flaw in either the specification or the Berkeley reference implementation (which I assume they had all modelled their design on). The best 'expert' I found (who I was told by a reliable source was THE top UK expert) claimed that the failure mode was impossible and therefore wouldn't answer any more detailed questions ....

    I could give you other examples, including one where there were probably never more than a dozen real experts and are probably none left, but which is still important. Nowhere near the level of TCP/IP, but still important. One of the repeated stories of my career was getting stuck, looking for an expert, only to find that I knew more about that particular aspect than the expert did :-(

    The point here is that this is NOT simply a matter of the code, but about the way that the technical aspects of IT are no longer primary, and the response to a failure mode in a poorly-understood component is NOT to find out why and fix it, but to say "don't do that" or add a layer that doesn't trigger it. I believe that SOME embedded markets are better, but I know for a certainty that they don't include biomedical or automotive, and have heard that even aircraft control is heading that way.

    Furthermore, this has led to an increasing inclusion of a general-purpose component (up to and including a complete operating system), with the assumption and claim that most defects in that component don't matter because the including system doesn't use the feature with which they are associated. I have been told that the networking of devices wasn't a security risk because it was enabled only during initial configuration. But checking whether a feature is permanently disabled isn't any easier than checking whether a human is dead. After all, a modern system has as much code as human DNA, and many have nearly as many states as the human brain.

    As OGH implies, be afraid, be very, very afraid. And, to misquote Haldane, "the future is not only weirder than we imagine, it is weirder than we can imagine." The one thing that I can confidently predict is that, no matter how far outfield the predictions in this thread are, the result will be more so.

    209:

    Reading code is harder than writing it, therefore if you are writing code to the limit of your ability then you ain't going to be able to read it again.

    I learned this and stopped trying to be clever shortly after leaving uni.

    210:

    Absolutely. I think it's possible to simultaneously be a mediocre programmer and a pretty good engineer. I own up to the former, aspire to the latter, and like to think that my career indicates I've not done too badly in living up to those aspirations. :-)

    In case you're interested I'm currently spending a lot of time clearing up after a (former) colleague who may have been a good programmer but was demonstrably a very poor engineer...

    211:

    In other words, software engineering/programming should become like Herman Wouk's Navy, "a master plan designed by geniuses for execution by idiots"? I agree; I don't much like it but I agree. Software has become too big, too important and above all too vulnerable for the old individualistic ways of our youth.

    212:

    Take something both complex and subtle that a lot of really smart people have been working on for a long time trying to get it to work properly across a wide range of situations, and bash out a rough approximation in a couple of weeks.

    Which is more or less what nginx (or node.js or strawberry py) is versus apache. Sometimes the next generation has a higher level abstraction that really does obviate all those weird accommodations to reality that ours thought were necessary. In the old days you wouldn't write a web server, you'd use apache with mod_c, mod_perl or (Dog help us) mod_php, and most of these would provide just as many opportunities to fuck stuff up as writing your own webserver does. These days you use node or one of many things like it to write your own webserver, but one that is itself a standard library and you just implement your application as a bunch of callbacks. It's faster and safer for a whole bunch of reasons. Sure, that's not the same as reinventing stuff, but the point is that someone had to do the reinventing to get us there.

    213:

    "I said "operate wirelessly, or at least has the 'option'", not "wireless connectivity". We have a central-heating controller like that, there are doorbells, and so on."

    Wireless doorbells are great for implementing one-bit short-range links: cheaper than buying the chips and RF units separately (and it's difficult to find the chips on their own these days, though it used not to be), and you get handy extra bits like a nice case with a battery compartment and lots of room in it for extra circuitry.

    The transmitter is nothing more than a sequencer that modulates a carrier with a fixed bit stream - equivalent to a couple of 4017s and a 555 all on one chip. They all operate on the same carrier frequency, so four or five bits are usually set by jumpers to implement "channels" in case the people next door have got one as well. (Another level of discrimination is provided by every bugger who makes one using their own slightly different choices of parameters for the bit stream timing.) The receiver clocks the received bit stream through a shift register and gives an output when it detects two identical consecutive sequences that match the correct pattern (again, just a bunch of flip-flops, a clock oscillator, and a multi-input AND gate). For applications that involve transmitting more than one bit of state, like wireless CH timers or TV remote controls (which are the same thing, just using IR instead of UHF), some of the bits at the end of the sequence are determined by which button you press / whether the timer is demanding CH or HW / etc. There is no intelligence, and no hacking potential beyond next door's kid setting their unit to the same "channel" as yours.
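
    A minimal sketch of that receiver logic in C (illustrative only: the 12-bit code, the channel bits folded into it, and the function names are made-up stand-ins for whatever a real unit uses):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CODE_BITS 12u
    #define CODE_MASK ((1u << CODE_BITS) - 1u)
    #define CODE      0x5A3u     /* fixed pattern, incl. jumper-selected channel bits */

    static uint32_t shift_reg;   /* holds the last 2 * CODE_BITS received bits */

    /* Clock in one demodulated bit; report a match when the expected code has
       arrived twice in a row (the "two identical consecutive sequences" test). */
    static bool doorbell_clock_bit(unsigned bit)
    {
        shift_reg = ((shift_reg << 1) | (bit & 1u)) & ((1u << (2u * CODE_BITS)) - 1u);
        return (shift_reg & CODE_MASK) == CODE &&
               ((shift_reg >> CODE_BITS) & CODE_MASK) == CODE;
    }

    int main(void)
    {
        bool ring = false;
        /* Feed the code twice, MSB first, as the transmitter would send it. */
        for (int rep = 0; rep < 2; rep++)
            for (int i = (int)CODE_BITS - 1; i >= 0; i--)
                ring = doorbell_clock_bit((CODE >> i) & 1u);
        printf("bell %s\n", ring ? "rings" : "stays silent");
        return 0;
    }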

    "Maintenance engineers connecting wirelessly" implies a much greater level of complexity; beyond burglar alarms, I'm pushed to think of any household gadget for which the concept of a maintenance engineer "connecting" makes sense.

    "Thirdly, are you sure that they don't have it physically installed but not documented or configured? It's extremely common to use the same controller across a range..."

    Have not observed this in anything I've taken apart. The design philosophy attaches considerable importance to "what can we do to save a penny every 1000 units", so while it does make sense to use the same software for everything and enable/disable features as required, it also makes sense not to include the extra hardware for a wireless connection when you're not going to use it. The circuit board may well have the tracks and pads required for all the possible hardware features of the top of the range model, but in lesser models the pads relating to features those models don't have simply have no components soldered to them.

    214:

    Reading code is harder than writing it, therefore if you are writing code to the limit of your ability then you ain't going to be able to read it again.

    And other people won't be able to read it either, and code is read more often than it is written. Programming languages are for people, not computers. Also, 'clever' code usually isn't, because the optimisers in compilers are good at recognising standard patterns unless you obfuscate them with cleverness.

    215:

    [shrug] I find I need to put a tenner on it every couple of weeks or thereabouts. Even for me, O(five pounds a week) doesn't count as expensive.

    How the price per kWh compares with anything else I don't really know, and can't be arsed to find out because there is no potential for significant savings, but there is significant potential for extra hassle.

    216:

    Assuming giant cruft in the internet of things, along with long-lived legacy installs, and continued Moore's law style growth in neuromorphic computing capacity...

    The simplest way to keep things working might end up being deploying benign, self-limiting, artificially intelligent viruses that would hack every connected appliance in some local area and handle maintenance and functioning. Oh look, genius loci. The formalisms for determining which device went into which domain might be interesting.

    On a more prosaic note, a large enough supply of Nest thermostats might allow for a DDoS attack on the electricity or natural gas supply.

    217:

    To answer the OP, the IoT will bring about The Dark Times, when the internet goes belly up for a length of time. Whether that's a day or a month or longer. Completely unusable.

    Yes I know it's supposed to be impossible, but what the heck, it's an answer :-)

    Now... I am an okay developer, I usually seem to be ranked #3 or #4 in places I work with 20+ devs. I've worked with people scary good, orders of magnitude better than me in every facet -- it's great, helps to keep my ego in check. The range between (say) John Carmack and the least competent professional developers is incredible. We've quantified people as being a liability just by being assigned to a team. You're better off sending them home.

    I recently worked with an outsourced team -- devs literally straight out of university. First jobs, and there were many, many things they didn't know -- stuff you forget you've learned (it took 3 hours to explain to one dev the difference between a file path and the contents of the file -- fortunately the phone had a mute button). Yes, there should be processes in place to make sure the whoppers get caught, but realistically most of the test cases will be about functionality the customer uses, not securing the code. And it's just expensive to write rock-solid code.

    It takes a fairly mature software house to build security into their processes. Microsoft understood this a long time ago (anyone who still thinks otherwise has been in a coma for a decade). I don't see small manufacturers being able to produce rock-solid secure code. It's really really hard.

    I could ramble more, but I'll leave you with this: Said the .Net developer to the JavaScript developer, "These tasks show promise."

    218:

    Remote controls are migrating from IR to Bluetooth on cheaper devices. My Amazon Fire Stick (£19.95 on sale) has a Bluetooth remote. The amount of computing power and software in cheaper devices is also increasing - Fire Stick is a dual-core ARM running Android. The new version is a quad-core ARM. I have a £8.95 portable wifi bridge extender that is running BusyBox if you telnet to it. These devices have huge hacking surfaces.

    219:

    I'm pushed to think of any household gadget for which the concept of a maintenance engineer "connecting" makes sense

    Proper home automation systems definitely need engineering access to configure properly. Lighting, blinds, AV switching, heating etc, particularly in terms of tweaking the timings and scenes. But they are almost always hardwired to a controller, though the newest stuff we've been playing with is mesh wifi.

    I'm with you ... most of the IOT gadgets are just a waste of time for idiots with more money than sense ... classic catalogue filler. The Philips Hue stuff is a pretty low quality offering, not surprised it has security issues along with everything else it does badly.

    Hmm, the individual Loxone mesh stuff is configurable over wifi, but you need the engineering software to do it, the stuff the end user has access to abstracts all of that away. And almost always you push changes through the control server anyway simply because it is easier.

    220:

    "The sad thing is that so many of these people are running embedded Linux, which has extremely powerful security available, and their not using it! WTF?"

    WTF? CBA :) Also, how out of date is their kernel? (This also has some connection with EC's comment at #110.) I get the impression that several years ago, some bunch called MontaVista hacked down Linux kernel 2.4.17 so that it would run in next to no memory, and ever since then anyone who wanted an OS to run in a minimal system has just grabbed that and used it. While the number I've investigated is only a tiddly proportion of the number available, the number of so-called "routers" I've found that run that kernel greatly exceeds the number I've found that don't.

    Those things tend to sit there untouched from the moment they're bought until their electrolytic capacitors start drying out, with no updates ever applied to firmware that was years out of date when they were new. And their security problems aren't just down to failure to use the security features of Linux, but also to undocumented features that can only be the result of deliberate action and are also really fucking stupid. Things like open ports with non-standard numbers, on the internet-facing side, running telnet servers that allow passwordless root logins, with nothing that tells you they exist and no setting to turn them off. (Fortunately I managed to make them inaccessible by telling the thing to NAT those port numbers to non-existent services on the LAN, but most people wouldn't even know they existed, let alone be able to invent a workaround.)

    Although it is still surprisingly easy to get versions that do not have wireless hardware - and rather handy they are too; usable as a general-purpose Linux platform at a third of the price of a Raspberry Pi including built-in RS232 port, PSU, and case - that makes no appreciable difference to the fact that trusting one of these things out of the box basically means your LAN can be assumed to be wide open unless proven otherwise, even if it's all ethernet. Even if you do have your own firewall as well, they can, if hacked, still snoop and edit all your traffic - including SSL, with no warning if malware can manage to install a hooky CA in your browser.

    And this is an instance where I will admit that it is basically impossible to avoid the security hole, since you cannot get anything which is just a pure PCI-to-ADSL interface under the control of the host OS. All the so-called "PCI routers" or "PCI modems" available these days are nothing more than the whole of the circuitry of one of the external boxes, lifted lock-stock-and-barrel and stuck on a PCI card, with its ethernet port hard-wired to a boggo PCI NIC chip. (And at several times the price of the same hardware in the usual external box: they must be one of the most pointless devices out.) There is just no way to get hold of a pure ADSL interface without a whole extra unnecessary full-on computer to run it, full of unknown vulnerabilities and hidden behind a you-don't-need-to-know-this interface, unless you build it yourself from scratch.

    221:

    Hi all, long time reader sometimes poster, rarely about what I know, but this I know!

    The IoT gives me literal nightmares. If the road to hell is paved with good intentions, then the road to destruction is paved with great ideas which are overused; html and http are freaking ace and will kill us all.

    To talk you through this, let me start with some basics:

    1. Your browser can talk to anything that talks http and pull content from anywhere. You see this when you see an embedded map, or Google/Facebook OAuth.
    2. Javascript is Turing complete, with full access to a (limited) network stack. It is 'limited' to connecting to HTTP servers and websockets. You can run 'serious' software using just a browser.
    3. Every un-patched OS/application that is more than 2 years old is vulnerable to some drive-by passive exploit if you're running an http server.
    4. Every IoT device runs an http server.
    5. Every IoT device will (at some point in the future) be vulnerable to drive-by http-sourced exploits if YOU browse the net.

    People don't really get that a malicious script served (in an ad, or whatever) via a rock-solid, fully patched browser is capable of hacking every IoT device on your local network, behind your nice shiny firewall; and even if the version of Linux or whatever that device runs is fully patched today, it won't be in 12 months.

    Best of all, this isn't something your browser can just patch; it's the feature that makes the internet work.

    So back to the question, what are the unanticipated downsides of the decay of the IoT? Vertically integrated hacking corporations, flush with VC from gaming the stock market, decide to take over literally everything, because as of today literally everything that serves out a web page is vulnerable.

    222:

    Imagine if surface coatings similar to what that story describes evolved into a parasitic version, that found that colonising a human skin was a handy way to move and spread...

    The relevant properties of the human would not include the human's capacity for movement - the stuff seems to be able to deal with that need itself. It would be interested in the human body as a structural scaffold to overcome the structural weakness of being a thin film, with an appearance that puts humans off attacking it. As long as the film was opaque, whether or not the human inside was still alive wouldn't matter - or indeed, better if not, as a dead human would not be trying to fight the stuff's own movement. It could replace the tensile structures that articulate the skeleton with its own substance, allow the skin to transform into a sort of leathery inner layer to provide local support, and just let the rest rot away - keeping the various stinky gases and liquids sealed within and voiding them down a drain whenever convenient. Of course, this all takes time...

    223:

    That depends a bit on your ability to do 2 things:-

    1) Use a programming language that allows meaningful names for programmer-created entities.
    2) Then actually create and use meaningful names.

    For example:-

    KK_Degrees_To_Radians : constant Long_Float := Math.Pi / 180.0 ;

    KK_Radians_To_Degrees : constant Long_Float := 1.0 / KK_Degrees_To_Radians ;

    FL_Azimuth : PK_Degree_Types.FL_Angle_Degrees_Type ; -- PK_Degree_Types.FL_Angle_Degrees_Type is declared as range 0.0 .. 360.0

    FL_Sin_Azimuth : Long_Float ;

    Begin

    -- Read FL_Azimuth from environment

    FL_Sin_Azimuth := Math.Sin ( Long_Float ( FL_Azimuth ) * KK_Degrees_To_Radians ) ;

    -- Other code

    End ;

    Are you in any doubt about what that code does?

    224:

    Oddly enough, I'm migrating off prepayment meters as soon as I can possibly arrange it. From the supplier who omitted to organize the reset of the gas meter when I bought the house (leaving me paying off a fairly substantial balance from when the house was empty), to sending a demand for a bill I'd never received (and almost certainly was not for me anyway), to having a support line that shuts down at 17:30 (and having booked me an engineer appointment for a Thursday, entering it onto the system for the previous day....)

    Then the proprietor of the local InConvenience Store declines to reload my prepayment card "you never shop here, only top up the cards, and that machine costs us money". (That was the final straw and has cost them any chance of repeat business and gained them a complaint to the service provider.)

    Ex-rental property: the gift that keeps on giving.

    225:

    Getting a CS degree is harder than getting some other kinds of degrees (and because of inflated wages and PR, CS degree programs attract lots of people incapable of completing them), but getting a CS degree is a great deal easier than becoming a competent programmer. A handful of companies are hip to this & have other mechanisms for sorting through applicants for entry-level positions, but most proprietary code is not written by tech companies -- most proprietary code is written under conglomerates that employ a variety of different kinds of workers performing different kinds of tasks with a shared HR and shared hiring practices. This means that there are a lot of positions that are filled based on the assumption that a CS degree is as valuable for sorting through software engineer candidates as a French degree is at sorting through potential French translators. Outside of conglomerates, in the start-up world, we instead see optimization for "cultural fit", which essentially means looking for people with similar hobbies to the existing employees rather than looking for people with the appropriate technical skills. In both cases, this inefficiency is subsidized by big stores of cash controlled by people who neither know nor care about optimizing for technical competence (the umbrella company or the VCs respectively). Short-lived fashions rocket through the industry in part because it's dominated by people who lack the experience to distinguish between shallowly and deeply interesting ideas, old and new ideas, and (in many cases) good and bad design decisions.

    (It might be sensible to treat software engineers like we treat doctors: an undergrad program, a graduate program, and a multi-year mandatory internship at a 'teaching' institution where they do the scut work, before we consider them qualified to start working independently. Eight years of heavy experience is barely enough to bring a software engineer with natural talent to the level of barely-competent, but it's an improvement.)

    To reiterate, a major source of our current systematic technical problems is that few organizations can reliably distinguish between code monkeys and hackers, and those organizations that can often aren't incentivized to do so. This is not sustainable, in multiple ways: technically incompetent organizations are propped up by funding mechanisms that provide money in exchange for charisma (see: Theranos, pretty much any 'unicorn') and throwing money around at that scale without actually optimizing for profit will eventually drain the VCs; as technical debt accumulates, making fixes becomes increasingly time-consuming and expensive, so bringing in competent people becomes more important.

    As for globalization -- as developing countries get more infrastructure, the number of developers from there will increase, sure. But, it's not going to outstrip demand. We have weird high demand and high wages for developers right now, and it attracts people who don't know what they're doing, because for each code monkey we hire we need ten more code monkeys to fix the mistakes of the first one. If we raise the minimum competence standards, we won't see an influx of code monkeys, because we're factoring out the code monkeys and cutting off the feedback loop -- we're hiring only people who can avoid the worst mistakes and fix most of the ones they make (i.e., people who provide positive rather than negative value). A greater proportion may be from recently-industrialized places because, as wages drop, people from the west who are in the industry primarily to make a quick buck will drop out (even if they're above code monkey levels of competence) while those wages will still look attractive to someone living in a place with cheaper living expenses (or supporting a family who lives in such a place). However, once average code quality gets better, expected code quality will get better, and anybody in the market will need to have high standards in order to survive.

    226:

    Knowing your limits and making an effort to be safe & write clean code is, essentially, being a good programmer. Going whole-hog into complicated technical tricks without needing to is the mark of an amateur.

    When I say "mediocre programmer", I'm actually talking about several different classes of syndromes. One can be extremely clever yet still be mediocre if one isn't wise enough to know when being clever is foolish (which is almost all the time). Not knowing how to use best-practices like revision control, syntax highlighting, build systems, and automated testing makes one mediocre. Not knowing when to avoid best-practices because they are inappropriate makes one mediocre. Grasping only a handful of languages makes one mediocre. Lacking the necessary theoretical background to understand, say, hashing, sorting algorithms, time and space complexity, or state machines makes one mediocre. (It's less common now, but ten years ago I used to run into self-taught programmers who were extremely clever and didn't understand any of these ideas, and they were impossible to work with. These days it's more common to meet clever people who only know one language and have no theoretical background because they're self-taught or went to a bootcamp.) Somebody who doesn't have the intellectual curiosity or technical chops to do the work is a slightly less poisonous kind of mediocre than someone who is all over the code like a rabid weasel but doesn't know what git is.

    227:

    All of this tends to fail if you have a property like mine. I live in a terraced house that is over 150 years old. It is made of local sandstone, which is a fairly good shield against microwave radiation, and backs onto a hill.

    The meter is in the rear of the property, mounted on a woodworm-eaten backboard, down near floor level in a small cupboard.

    My electricity supplier's engineers have visited me twice now, noting that the electricity meter backboard will need to be replaced before anything else can be done, and that although a mobile phone signal is detectable a metre or so off the ground (coming in through the windows most likely), there is no detectable signal whatsoever at floor level where the meter is located.

    I have therefore been added onto the back of a very, very long queue of people who need engineering work done on their electricity installation before a dumb meter (which is what must be installed, because a smart one won't work) can go in.

    The "smart" meters in this case mean a device designed by the lowest bidder which has only an internal mobile phone antenna, and no wireless networking, RJ-45 ethernet or jack for external antenna provided at all.

    As a result, they would appear to be getting around to things very, very slowly indeed.

    228:

    Quite.

    Note that I didn't say that being clever was the same as being smart :)

    229:
    In the longer term (say, 25 years from now), all signs point to Vernor Vinge's concept of the "software archaeologist" to be justified:

    That future is already here, just not evenly distributed ...

    My job title is "Computational Scientist". I work on the computational aspects of science software (specifically climate and weather prediction). But scientist ?

    It's my task to understand and analyse existing software. I'm staring at 1M lines of a major code, designed in the early 80s but since then organically grown, to understand what it does, how it works and how it will behave on new computers. You would think we'd know all this (because it was engineered), but no.

    We don't know how the parts will interact (because individual components were written by different people) in the this-trashes-the-cache-of-that manner, we don't know why some algorithms were implemented the way they were (things have changed since this ran on a Cray 1). Re-implementing from scratch is a 10-20 year task (seriously, yes.)

    And yes, I'll take dusty Fortran from 1980 and at times wrap it in a modern interface, add unit tests, documentation and make it work with C++.

    230:

    I've been privileged enough to work with some properly good, so clever it actually scares me programmers, I've seen the difference, and deep down inside I know that I fall into the mediocre programmer category.

    Yeah, me too. I kind of hope I'm a better SF writer than I was a programmer.

    I believe my acknowledgement of my own mediocrity makes me a better, safer programmer than a lot of people who don't know that they're mediocre.

    So congratulations: you're not in the bottom quartile (or equivalent) who are subject to Dunning-Kruger syndrome.

    231:

    The correlation between having a CS degree and being a competent programmer is not high. Quite a few companies will not employ CS graduates as programmers.

    This is as sensible as hiring surgeons who didn't go to medical school or airline pilots who didn't go to flight school. Some CS graduates are poor programmers (I have seen it) but they have at least been exposed to and passed an exam at some point on a broad range of rather important basic material that autodidacts have likely avoided or aren't even aware of.

    Why you don't represent money with floats. Normalizing databases. Big-O notation. Scope. Basic stuff.

    232:

    Money - Exchange rates can run to as many as 7 significant figures, and 4 decimal places between major currencies.

    Database normalisation - Only actually of use if you design relational databases.

    Scope (and visibility) - Surely these rules vary with the programming language(s) you use?

    233:

    Tend to disagree - no Comp Sci degree here. As with most things it's the quality of the education that counts, not where it was performed. My Database normalisation training came working as a junior programmer on a big Data Warehouse project with more senior (and decent) developers. Given it was our asses on the line for a 3 month deliverable, it was a far better learning example than a dusty exercise the lecturer hasn't revised for 20 years.

    I did my assembler programming course/apprenticeship with 1 CompSci Major, half a dozen STEM BSc's and BEng's and a BA in Egyptology.

    The Egyptologist wasn't the most gifted programmer in the room but was one of the most robust and consistent. Important when virtually every untrapped error causes a core dump. One of my colleagues managed to take down half the travel agencies in Italy with one ill-considered release.

    234:

    Money - Exchange rates can run to as many as 7 significant figures, and 4 decimal places between major currencies.

    Money is decimal and floats (or doubles) are binary. Accounting rounding rules are decimal. Representing money values and calculations in software using binary floating point is grief.
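
    (A minimal Python sketch, added here for illustration and not the commenter's own code, of why binary floating point is "grief" for money, and of the usual workaround: keep amounts in integer minor units, or use a decimal type.)

        from decimal import Decimal

        # Binary floating point can't represent most decimal fractions exactly:
        print(0.10 + 0.20 == 0.30)                  # False
        print(f"{0.10 + 0.20:.20f}")                # 0.30000000000000004441

        # Exact alternatives: integer pence/cents, or the decimal module.
        price_in_pence = 1099                       # GBP 10.99 held as an integer
        vat = (price_in_pence * 20 + 50) // 100     # 20% VAT, rounded half up

        total = Decimal("10.99") * Decimal("1.20")  # decimal arithmetic, no drift
        print(vat, total)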

    Database normalisation - Only actually of use if you design relational databases.

    Doesn't everyone?

    Scope (and visibility) - Surely these rules vary with the programming language(s) you use?

    And not knowing that can lead to mistakes like the reference PHP code for a RESTful API I was looking at the other week using C scope rules. So it not only didn't work but obviously hadn't been tested. (PHP scope rules are backwards to any sane language.)

    235:

    Integer arithmetic either gives you the correct answer with perfect accuracy (including division, as long as you don't drop the remainder), or else it overflows, which is trivial to detect - for instance by checking the CPU overflow status flag. Arithmetic on floats will silently drop precision and you don't realise until it's too late.
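
    (To make "silently drop precision" concrete, a small Python illustration added here, not part of the original comment: repeated float additions drift, while integer arithmetic on the same quantities stays exact.)

        total = 0.0
        for _ in range(10):
            total += 0.1
        print(total == 1.0)      # False: accumulated binary rounding error
        print(total)             # 0.9999999999999999

        tenths = 0
        for _ in range(10):
            tenths += 1          # the same sum, held as a count of tenths
        print(tenths == 10)      # True: integer arithmetic is exact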

    But I do disagree that only those formally educated in CS will realise this. I can't remember where I picked it up; it might have been programming 8-bit home computers in BASIC that treated all numbers as floats, or it might have been from pocket calculators, but I do know it wasn't something anyone else taught me.

    Agree also on scope; it's so basic to a language that I can't see how you could have any useful ability in that language without having picked up its scope rules.

    When it comes to the more complex concepts such as those John Ohno mentions @ 227, I tend to find that I come up with the idea myself, and then go for years without realising that the occasional mentions of droogling I happen across actually refer to the same thing I've been thinking of all along as "that algorithm I came up with for the string matching in GENSED". The habit of naming computing concepts by taking a normal word and mangling its meaning beyond all recognition, deriving the name from some in-joke current in the namer's circle that only 10 people understood at the time and now nobody does, or making up some nonsense word like droogling, is a significant factor in this. I take John's point about "harder to work with", because it's obviously harder to communicate without a shared vocabulary, but I don't think it indicates a deficiency in programming ability; rather, deriving the algorithm myself gives me a better understanding than reading the section on droogling in a textbook and C&Ping the code or using a library routine.

    236:

    "Money is decimal and floats (or doubles) are binary. Accounting rounding rules are decimal. Representing money values and calculations in software using binary floating point is grief."

    Believe it or not, I've never knowingly programmed in decimal; the closest I've come is COBOL or Ada fixed point. (Hint: I'm not involved in programming business software applications.)

    "Doesn't everyone?"

    Well, no. At least not for 25 years.

    "And not knowing that can lead to mistakes like the reference PHP code for a RESTful API I was looking at the other week using C scope rules. So it not only didn't work but obviously hadn't been tested. (PHP scope rules are backwards to any sane language.)"

    Which means that someone didn't know PHP scoping rules; it doesn't mean that they don't understand the base concept.

    237:

    "Doesn't everyone?"

    Nope. Only place I ever encounter database stuff at all is in website CMSes, where someone else has designed the database and all I have to do is understand what they've done - if that. And I'm less likely to be doing that than to be in the kind of environment where 128 bytes is a big array and I/O uses RS232 because USB would require an order of magnitude boost in the processing power available.

    "(PHP scope rules are backwards to any sane language.)"

    It's really that that's the problem. I get caught out by that, not because I don't know about it, but because it's arse about face and barmy compared to languages I use more often when most other features are much the same, so I tend to forget about it. But then PHP is a language where you can imagine the designers have gone through the spec and deliberately changed random bits to be barmily opposite to convention, giggling into their sleeves and thinking "he he, this'll catch them out".
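
    (An added illustration, not the PHP code under discussion: scope gotchas of this kind aren't unique to PHP. In Python, for instance, a function can read a module-level name, but assigning to it creates a new local unless you opt in explicitly.)

        counter = 0

        def bump_wrong():
            counter += 1    # UnboundLocalError: the assignment makes `counter` local

        def bump_right():
            global counter  # explicit opt-in, much as PHP needs `global $counter;`
            counter += 1

    Calling bump_wrong() raises UnboundLocalError at runtime; intuition carried over from C-style scoping simply doesn't transfer.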

    238:

    dusty Fortran from 1980... NOW you're talking!

    ( Hinty: That is about as far as I got, before my employer shafted me, unfortunately .... )

    239:

    "Integer arithmetic ... Arithmetic on floats ..."

    That's also seriously wrong, especially in C. While I have used systems and languages (and implemented a couple) which had reliable error detection, I doubt that you have. This is not the place to describe the deficiencies of the C standards (sic), the IEEE 754 standards (sic) and modern hardware in this respect, but I still could. Thank God, retirement means that I am under no pressure to keep abreast of those.

    240:

    It ain't what you don't know that causes the trouble; it's what you know for sure that ain't so.

    In all of the cases I encountered, they had adopted the policy as a result of finding that CS graduates needed 6-12 months of reeducation, over and above that needed by most other graduates, to unlearn some of the fallacious dogmas and assumptions they had been taught or picked up. The same applies to medical school graduates in biomedical research, incidentally.

    241:

    Yes, precisely.

    What very few people realise is how peculiar and multifaceted the arithmetics required for finance are. This is why the claim that IEEE 754 decimal helps with financial calculations is just plain wrong. There isn't actually any great difficulty in using binary floating point for the calculations, if you know what you are doing, but damn few people do nowadays. And NONE of that is taught in any CS course that I have heard of.

    And, as far as scoping and related matters are concerned, most CS graduates will claim something that fits into their worldview, and refuse to be educated by someone who actually knows what they are talking about. Not the best ones, of course, but most. A concrete example: Fortran's argument passing model.

    242:

    Know of at least one corp that budgets for new certs/exams for every technical level ... they're the market leader in their industry (coincidence?).

    Have met folk who design cert exams, as well as some academic profs and folk working in cutting-edge private for-profit and not-for-profit concerns who assessed/reviewed the contents. Most interesting is that the very bright ones take such requests seriously, and seem to genuinely enjoy providing well thought out and usable evaluations and suggestions.

    243:

    From a non-techie ...

    Sounds as though there's tons of tacit knowledge out there, and because it's tacit (undocumented) screw-ups are inevitable. Not a good thing to found an AI culture on: 'What's this line do? ... Dunno ... Who wrote it? ... I forget ... What's the work-around/can we drop it? ... This line, but we've been using it to provide the client with our patented such-and-such ... Sh*t!'

    I figure that although AI will be designed as an autodidact, its abilities to do so would depend on having thoroughly documented and tested code as its skeleton or tabula rasa to write upon. If the skeleton stage is buggered up, I don't see how an AI could correct itself into 'health' or optimal function.

    Also - looks as though the coding/programming industry is at the stage where there's so much specialized knowledge that it should consider formally splitting up into manageable well-defined specialties with integrated sub-specialties similar to biology, history, engineering, law, etc. (Comments here suggest that a 'good coder' should be able to code anything anytime ... not what you'd hear said about other mature branches of learning.)

    Re: Bill Arnold @ 173

    Thanks! Followed story links to this page which contains IBM's response to a US Gov't RFI re: AI. Very detailed, mostly plain-language and probably of interest to readers of this blog.

    http://research.ibm.com/cognitive-computing/ostp/rfi-response.shtml

    244:

    Y2K: it was overwhelmingly crap and heavily pushed by the consulting firms, to make money. Having worked on mainframes, EBCDIC does not magically roll over to all zeros....

    Programming. I'm a good programmer, or was - I've been a sysadmin for over 10 years, and nothing else.

    But I wrote stuff that was good. Hell, I wrote a database system in ->basica<- in '84-'85 that was orders of magnitude faster and smaller than the best vendor we could find that would run on a PC.

    Wish I'd gotten around to turning it into C.

    On the other hand, I've seen good, bad, and ugly clever people. Some didn't believe in too much typing. I like my manager a lot, and he's good... but I wish he wouldn't use so many regular expressions in his code. HARD to read without study.

    Then there's the opposite, too much typing, like the programmer whose code I fixed/enhanced in '88 or so, in COBOL, who didn't understand arrays. Or loops, very well.

    Here's a great way to tell if someone's good: mid-nineties, I was working for a Baby Bell, in a start-up division that went head down. But we had a ton of young, right out of college consultants. About a year after I started, I was talking to one in the evening, before we each left (18:00? 19:00?), and she said that she looked at code she'd written a year before, and that it was crap.

    Her I'd hire any day.

    mark

    245:

    The same applies to medical school graduates in biomedical research, incidentally.

    I was told (by a researcher at a conference last year) that the problem with doctors doing research is that they'd spent so long cramming in knowledge that they had a hard time questioning assumptions (or even noticing that they were making them).

    The usual caveats about anecdote not being data apply.

    246:

    [buffs nails on shirt] I have a good friend who's listed as author of two *Nix commands in the man pages....

    mark

    247:

    Talking of decay ( Warning, I felt sick when I read it, & you might need a bucket. ) THIS And some people wonder why I'm an atheist & don't like the churches in any part of Ireland

    248:

    I do worry that the "most CS Graduates are useless" theme reflects prejudice [1]... I may be biased; I had a grandmother that repeatedly insisted that because I was bright, I must therefore have no common-sense, for this was her worldview. It let her feel better about herself.

    We've been around this particular debate before, when dealing with professional qualifications / memberships. My take is that many graduates (and HR departments) fail to understand that a degree is merely the start - congratulations, you have some education that allows you to begin your training. Pretending that training is not required is an unfortunate feature of badly-managed firms (that is, many of them) and badly-trained, under-skilled, or overstretched senior engineers (that is, many of them).

    If you're dropped into the deep end at your first job, the risk is that after three years or so, you're firmly into Dunning-Kruger territory of the "hey, I'm an experienced engineer, who are you to tell me to do things differently?". Once this type of person is handed some younger engineers to manage, the cycle begins anew - because they have never been trained or mentored, they don't see the need for it. After all, if they could survive, why can't this lot?

    If you train and mentor those good habits in to the novice software engineer, however, things can be different. Give me a graduate until the age of "degree plus three", and I will give you a good engineer for life...

    The difference between the autodidact / bootcamp type, and the good-CS-course type, is that the CS graduate should at least have some understanding of the concepts beneath. They might not have done much assembler, or parallel processing, but they understand the principles. When I found myself having to write a parser, I was at least able to remember vague notions around BNF. When I found myself trying to debug with a bus analyser, I at least knew what a Program Counter was...

    [1] Although I have been quite depressed at how some people I've interviewed, have lacked even a basic understanding outside the narrow fields of their final-year modules. It reflected rather badly on the course they'd done...

    249:

    IEEE754 decimal is just fine for financial calculations. Shame nothing supports it natively anymore. Note that IEEE754 decimal is not IEEE754 binary.

    250:

    As it happens, the very first system I ever programmed on had integer overflow detection; it went something like this:

        LDA $op1
        ADC $op2
        BVC .ok
        JMP .handle_overflow
    .ok
        (...rest of code...)

    I know C doesn't allow you to test such a flag (or an emulation if the CPU doesn't have one), but I didn't mention C; I was thinking in more general terms. By "trivial" I meant that regardless of whether any given platform hands it to you or not, the test itself is very simple. Nor can I think of any case where integer arithmetic gives the wrong answer that isn't due to overflow.
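
    (An added aside, not part of the comment: the check really is simple even in a language that hides the flag. A minimal Python sketch, assuming signed 32-bit limits; Python's own integers never overflow, so the bounds are checked explicitly.)

        INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

        def checked_add(a, b):
            # Python ints are unbounded, so the sum itself is exact;
            # we simply refuse results that wouldn't fit in 32 bits.
            result = a + b
            if not INT32_MIN <= result <= INT32_MAX:
                raise OverflowError(f"{a} + {b} overflows a signed 32-bit integer")
            return result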

    By contrast, with every floating point implementation I've used, to whatever standard it's supposed to conform - whether hardware FPU, software emulation, or whatever a pocket calculator does - it's been obvious that you can never trust the last couple of figures, and in some cases you can't trust any of them; and dealing with this is a whole lot harder than checking for integer overflow. Since, if the calculation involves money, having any wrong digits in the answer is going to result in somebody chewing your feet, doing it by a means which nigh-on guarantees that wrong digits will crop up is not sensible.

    Therefore, I consider "don't use floats for money" to be sound advice. I'm not trying to claim that that's all there is to it, but I am saying that if you don't want £1.00 unpredictably changing to £0.99, steering clear of floats - unless you know your system provides a safe implementation as standard, which you nearly always don't - is a good start.

    If that (the bit from "Therefore, I consider..." to "...a good start") is wrong, and we're not just caught in a misunderstanding - I've noticed that you and I quite often get the wrong end of the stick with each other's posts over things that it turns out we actually mostly agree on - I would be glad of an explanation, as it would be useful education.

    251:

    So even if I wanted to be able to turn the lights off from the other room or adjust the stereo volume, I basically can't use wireless stuff? What if I use an IR controller into a detector and a wired system with no connection to the actual internet?

    252:

    When I say "mediocre programmer", I'm actually talking about several different classes of syndromes. One can be extremely clever yet still be mediocre if one isn't wise enough to know when being clever is foolish (which is almost all the time). Not knowing how to use best-practices like revision control, syntax highlighting, build systems, and automated testing makes one mediocre. Not knowing when to avoid best-practices because they are inappropriate makes one mediocre. Grasping only a handful of languages makes one mediocre. Lacking the necessary theoretical background to understand, say, hashing, sorting algorithms, time and space complexity, or state machines makes one mediocre.

    Nowadays, lacking in the best practices dept makes one subpar, as it makes one significantly harder to work with. On the other hand, the other things you mention didn't come up in the job I've had since graduating uni. Since most of my team were originally SysAdmins, many of them lack the theory on time and space complexity, state machines, or the various sorting algorithms. But on this project, it doesn't matter, because performance comes a distant third to the code provably working (unit + integration tests) and being designed as flexibly as possible (We'll probably be revisiting it later).

    Two things you didn't list that separate mediocre programmers from good ones are the ability to handle pointers and concurrent operations. From what I've heard and read from others, the three big stumbling blocks are systematizing rules (learning to code beyond copy+paste), pointers, and threads. One can be a great coder without them, but there will be problems one can't solve.
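
    (An added illustration of why threads are such a stumbling block - a deliberately contrived Python sketch, not anything from the comment above. The unsafe version interleaves its read and write, so updates get lost; the lock makes the read-modify-write atomic.)

        import threading
        import time

        counter = 0
        lock = threading.Lock()

        def unsafe_increment(n):
            global counter
            for _ in range(n):
                tmp = counter       # read ...
                time.sleep(0)       # ... invite a thread switch here ...
                counter = tmp + 1   # ... then write back a stale value

        def safe_increment(n):
            global counter
            for _ in range(n):
                with lock:          # the whole read-modify-write is now atomic
                    counter += 1

        threads = [threading.Thread(target=unsafe_increment, args=(10000,)) for _ in range(4)]
        for t in threads: t.start()
        for t in threads: t.join()
        print(counter)              # almost always far less than 40000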

    253:

    Ah yes, that one. Come across that a few times. "Bright people have no Common Sense" Or, in another iteration: "Academically-trained people are useless at practical operations or work" Both of which are complete bollocks, of course.

    Particularly as Common Sense isn't - common, that is, & even when it is used it often ends up with, er, shall I say "Aristotelian Physics" as its working model?

    [ E.G. Recently, I had to go through the just-post "O-level" explanation of why the point of contact of a rolling, non-slipping wheel was, momentarily AT REST, no matter how fast the vehicle was travelling. And, of course why the top of the wheel was travelling at double the speed of the vehicle... Common Sense NOT required, because it is counter-intuitive actually, like one or two other physical realities ]

    254:

    Given the diversity of opinions expressed here, you're going to have to try harder to convince me that Computer Science is a science and not, say, an art based in language.

    255:

    Of course you can. It's just that generally the good systems use an app that talks via wifi to a local controller, which sets the preprogrammed scenes, rather than any direct contact.
    A sample scene might be "close curtains, lower screen, turn on projector, connect AV source". You then have a separate control page for the AV source, or multiple. One house we support has six sky boxes in the rack which are sent over ip to the relevant TVs, with sound controlled separately by the audio controller for the zone the TV is in. Standard remotes work too, because you use converters to send the IR signal to wherever the boxes are.

    The biggest private house I've visited has five full racks of equipment for the building controls in the basement server room, and at least one Crestron panel in every room that can do anything. It gets real complex real quick. It also makes going to the bathroom a challenge unless you already know how the lights work!

    Generally everything only talks inside the network though, so less of an attack surface from the outside world. Alarms and access control are a different side, I don't deal with them much.

    256:

    You really may want to try researching before scoffing. (To actually respond to your attempt at sarcasm: I am actually credited in multiple man pages across various *nix implementations. Usually only where I wrote the man page to go along with the command, filesystem, or library, but I think there are a few that have me listed as a contributor anyway.)

    I'm in an unusual situation, I realize, but claiming TCP is mysterious is dumb. It's like claiming that virtual memory is mysterious, or process scheduling, or filesystem implementations, or security. They're only mysterious to people who don't work on them. And multiple people who work on some or all of those things comment here, including me.

    GUIs confound me. I simply do not understand them, and I don't really even understand how people can readily work with them. That doesn't mean it's mysterious and unknowable; it simply means it's mysterious and unknowable to me.

    Now, any other attempts at subtle insults you want to try?

    257:

    Part of my job as a laboratory research coordinator in a teaching hospital lab (actually three teaching hospitals) was making junior doctors' research projects actually work. They have to learn so much to enable them to do their job in medicine that they have little time to acquire knowledge which is not needed day to day.

    On a related note, I was researching interference in plasma uric acid assays because my lab was not performing as well as I expected in external quality assurance programs. I suspected the organisers were spiking their samples with an interferent. Since the main source of interference in urate assays is vitamin C, I did a big literature search. I found a number of papers in the 1960s and 70s reporting that vitamin C made gout worse in patients; from the 90s, papers started to appear saying that vitamin C supplementation made gout better. In the 80s labs began to change from phosphotungstate methods, which had positive interference by vitamin C in urate assays, to enzymatic methods in which vitamin C interferes negatively. Had the authors of those papers consulted their labs they would not have written the papers.

    258:

    I expect IoT by the major players like Microsoft, Amazon, Google and Apple to be secure in design and implementation and with long-term support for security updates.

    Apple's home automation system HomeKit, for example, requires manufacturers seeking HomeKit certification to build their devices with an Apple-supplied authentication/encryption chip. I assume this uses Apple's 'secure enclave' hardware, and that Apple will provide free software updates for security for device lifetimes. Similarly for the others.

    Outside of the big players I expect massive fail.

    259:

    The term "wireless" alone is not distinctive enough. There are two main classes under consideration, only one of which is relevant to this discussion, and bits of confusion are creeping in here and there due to failure to make the distinction.

    The relevant class is high-bit-rate systems like 802.11 and bluetooth. These need a pretty decent processor just to make them work at all, so just having that interface means the device is probably vulnerable.

    The irrelevant class is low-bit-rate systems which are basically the same as an IR remote control only with a UHF transmitter or receiver in place of an IR LED or photodiode. These don't need much more than a plain shift register to decode or encode, so a system that uses them may have no processor at all.

    Consider a touch-controlled lighting dimmer, which has no processor - just a so-called "digital potentiometer" chip, essentially a nonvolatile counter controlling a switched resistor network (similar devices can also be used for volume controls). Adding a remote control facility to this is simply a matter of taking the decoder chip for a low-bit-rate system and wiring its outputs to the inputs of the digital pot. The input to the decoder chip comes from an IR detector or a UHF receiver depending on whether you want it to work through walls or not. No processor, no programming, just putting the Lego together.

    IR range extenders are even simpler - just an IR detector in one room driving an LED in another. Or driving a UHF transmitter and the receiver drives the LED. They are pure repeaters, and about as hackable as a tin can telephone.

    Of course the systems that Mayhem describes are nothing like this. But I would submit that people who are into the idea of having six racks of kit to make it difficult to go to the loo at night are something of a minority :)

    260:

    "GUIs confound me."

    If they don't, you haven't even begun to understand them. All modern ones that I know of are based on a programming model that is known to be completely broken, which is why even the most mature ones are still prone to unpredictable and often incomprehensible misbehaviour. Yes, I have worked on them - and, after one experience, I swore "never again!" I accept that part of that was because the X Windowing System was written by unsupervised MIT graduate students, and the detailed design and code quality of X11.2/3 was CRAP (and, of course, almost entirely untested) and the documentation lacked the contact with reality of the Laundry series, but Motif was actually worse. However, the fact remains that there is no way that such a broken design CAN be made reliable.

    I fully agree that TCP/IP is not even remotely comparable, though one could say that trying to get reliability together with UDP is; they are a relatively simple and clean design, which makes both their use and implementation fairly simple. As you will gather, my remarks about TCP/IP refer to the error recovery logic, which is known to have serious flaws (and was when TCP/IP was designed). But you have to get to that level of subtlety before the flaws in TCP/IP show up, and even then they are extremely rare. If you are interested, please ask me for a description of the failure signature I referred to.

    However, I disagree with some of your other examples. Process scheduling has been a serious nightmare for as long as I have been involved with it (over 40 years), because there just ISN'T a right way to do it that meets all of the requirements. And security is an absolute disaster, because it has been bolted on to all current mainstream systems, rather than being designed in, and the IT industry has had 35+ years of almost invariably choosing 'functionality' over constraints.

    261:

    It's not - it's bitter experience, and shared by a surprising number of the best computer scientists. And I speak as someone who had a foot inside computer science, and who has taught some. The reason is that you simply cannot teach a practical engineering discipline without (a) taking longer than a degree course and (b) including a lot of supervised practical work. The problem is that too many learn enough to be dangerous, but not enough to be effective, and suffer from the delusion that what they were taught was fully correct and complete.

    To Mike Collins and Robert Prior, precisely. My wife is a biomedical researcher, and has had to supervise doctors attempting to get PhDs.

    262:

    Definitely another form of bad programming; not understanding how users are going to interact with your code....

    263:

    Speaking as an amateur programmer I agree with you completely. Know your problem domain, be humble, try to learn something about best practices, try to be secure and properly restrict input... and my own addition; understand that humans use software badly and make mistakes. Assume pebkac.

    264:

    claiming TCP is mysterious is dumb. It's like claiming that virtual memory is mysterious

    Great. So can you explain why when I send TCP packets from Sydney to London the larger packets often arrive sooner than the smaller ones? The threshold changes but is generally 100-150 bytes. It's mysterious to me. I'd like a fix, obviously, but even just knowing exactly why it happens would be a start. Especially since I don't see the same effect with UDP (or at least much less often).

    I guess with the similarity above you're talking about them both being conceptually simple but somewhat subtle to implement? Also similar in that I wouldn't want to have to write code to implement either in a production system. But I'm guessing you also know multiple people who have written VM systems.

    This discussion is also making me want to go and work in a team with smart people again, though, which is probably a good thing. Working in a small company where there's only one person doing any given thing isn't very good for learning stuff.

    265:

    Given the diversity of opinions expressed here, you're going to have to try harder to convince me that Computer Science is a science and not, say, an art based in language.

    It's about finding the level of detail that satisfies the human need to argue about stuff. If there were, for example, evolutionary biologists in here arguing at that level of detail, you might think there was no modern synthesis at all, and so on.

    266:

    (intent of the user errors) Definitely another form of bad programming; not understanding how users are going to interact with your code....

    There's a whole field of user interaction design. And it's important to note that users often have no idea how they want to interact with your code, or even how they have just interacted with it. Or what they want out of the interaction beyond "it should work". Writing code to help those users is challenging. Luckily my "users" are other programmers, which does make them a little less vague and unreliable. At least most of them, most of the time.

    267:

    the idea of having six racks of kit to make it difficult to go to the loo at night

    I'm working on that... at the moment it's about power really. The watershed is where it gets easier and cheaper to run 5 Raspberry Pis than it is to run a big PC with 5 VMs. The stumbling block is the big(ish) z-raid I built into the PC a few years ago... individual disk sizes haven't quite hit it yet.

    268:

    I've seen them and chemists discussing things; it all settled down pretty quickly as people worked out what the other one meant. The communications here are less clear.

    269:

    Y2K: it was overwhelmingly crap and heavily pushed by the consulting firms, to make money. Having worked on mainframes, EBCDIC does not magically roll over to all zeros....

    I call bollocks on that one.

    There was (and almost certainly some of it still exists) a vast amount of commercial code that used 2-digit years, mostly business applications written in COBOL for IBM mainframes[1] - because in the days when central storage was hand-knitted from ferrite rings and copper wire it was bloody expensive, and wasting an extra byte to store the century (which wouldn't change for 20+ years) was considered bad practice[2].

    If all your business applications break because $Current Date (YYDDDF) is less than $Previous Date (YYDDDF) you have a problem. Likewise with the "quick fix" Century Indicator, which gives time/date format 0CYYDDDF where C=0 for 1900 dates and 1 for Y2K, leading to dates appearing as 1st January 19100, etc.

    It was a serious problem (especially if you wanted the economy to still function and your money to be accessible after your hangover cleared up on 20000102 or near offer), and a very large number of people spent several years rewriting[3], circumventing[4] or just plain bodging[5] systems to ensure it wasn't going to be an utter disaster. Some things still broke[6] but nothing critical.

    I still have my 'Deathwatch' T-shirt[7] and occasionally wear it.

    Chris.

    [1] Using 'Packed Decimal' arithmetic - 4 bits per digit and a sign in the last 4 bits.
    [2] One exception: part of the oil industry, which routinely had 500 or even 999 year leases, so got it right to begin with.
    [3] Making all date fields 4 digits and sometimes implementing the 'divisible by 400' rule for leap years.
    [4] Making your date routines use a 'sliding window' so <50 = 20, >50 = 19 prefix, and making a note to review the window regularly. (One of my fixes was to automate this based on the actual 4-digit year from TIME DEC +/-50; there's a sketch of the idea below.)
    [5] Bodge the date routine used by 90% of the company's code to take 28 years off the date (to keep the day of week in step); during the Industrial Fortnight, shut all the applications down and convert every single date in every single file affected to its Y-28 value. In eleven years' time they will have it all to do again.
    [6] One little script got overlooked and failed because the generated filename was invalid - the 19100 problem. Quickly fixed on the night, and not business critical anyway.
    [7] Designed and produced by a couple of good friends. They asked if I knew of any more eligible[8] people who would like to buy one (so as to reach the next price break for orders) and I posted it to IBMLINK... resulting in a "What did you DO?" email due to a flood of orders from the USA and elsewhere.
    [8] Restricted to people actually working during the 99/00 switchover.
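
    (Added for illustration - not Chris's actual code: a minimal Python sketch of the 'sliding window' fix from note [4], assuming a window of up to 50 years into the future measured from the current year.)

        from datetime import date

        def expand_two_digit_year(yy, window=50):
            # Interpret a 2-digit year: anything more than `window` years
            # in the future is assumed to belong to the previous century.
            assert 0 <= yy <= 99
            this_year = date.today().year
            century = this_year - this_year % 100   # e.g. 2000
            candidate = century + yy
            if candidate > this_year + window:
                candidate -= 100
            return candidate

        # In 2017: expand_two_digit_year(5) -> 2005, expand_two_digit_year(85) -> 1985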

    270:

    I spent a good half hour from around 0030 on January 1st, 2000 calling all the vendor field techs whose numbers I had stored in my phone (not many) to wish them a happy new year. Not one turned out to be working (I woke one up). I was at a party myself, but taking time to sit outside and relax (because Queensland).

    271:

    As it is, programming is definitely a language-based form of art.

    A peer-reviewable Computer Science would be almost indistinguishable from the combination of Physics and Mathematics. It would still be a bit different though, as those sciences aren't constantly facing intelligent adversaries.

    272:

    The internet is nothing but pipes. As the Romans found, pipes and water systems are great. Trouble is, there is no easy way to distinguish the clean water from sewage. And would you trust governments to be the filter?

    I think, in the end, all traffic will want to be authenticated as to source. Users will want to choose their state of access (for themselves or their devices), and some form of content moderation of their choice.

    Kind of like DNS one level up based on use and user.

    273:

    If I don't see a link, you won't make me think.

    274:

    Depends on what you think Science is and what it's for. I think you're conflating some quite different kinds of activity.

    Technology is always artifice, and that's possibly what you mean by art here. It's applied science which is just a way of saying it is taking our newfangled knowledge and using it to create something useful. From a technology standpoint you'd have to see it as a branch of engineering, and people with a pure Science or Arts background usually do. That's missing something though.

    You can go and study Computer Science in a Science (or Engineering) faculty, or you can do some kind of ICT degree in a business faculty. There might not be a clean distinction everywhere, but the point is that most people in ICT have business degrees and see ICT as a business discipline rather than an Engineering one.

    Science, IMH*O, is about pushing back the boundary of dark outside the widening circle of light. When it turns out there is an isomorphism between information theory and thermodynamics, that is interesting and it means we're discovering more truth about the universe. Sure, some of what we discover is probably going to be about what it's possible to model using the techniques we've developed as part of the business and engineering, even linguistic aspects of ICT. Some discoveries depend on economic and social change to become possible - for instance applying big data techniques to medical research based on access to massive EMRs as per the scenario in The Rhesus Factor.

    • Really
    275:

    This. Although for every COBOL program storing dates as two digits there were at least two dozen RPG/RPG II/RPG III applications on (usually) IBM minis doing the same thing. Disk space was expensive; I vividly recall the day the IBM CE came in to the service bureau where I was a programmer trainee to upgrade the IBM S/34's hard disk from 20 MB to... 64 MB, maybe? Details, details...

    IBM's solution was to store dates as CYYMMDD where C was 0 for 1900s and 1 for 2000s. The secret was that EBCDIC has a packed decimal data type which can store 7 digits in 4 bytes (as decimal, not binary). Most of the apps coded in RPG were general business applications where currency was used a lot, so using decimal arithmetic was a real time-saver.
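
    (An added aside, not Mike's code: a small Python sketch of unpacking that CYYMMDD convention once it has been read out of the packed-decimal field.)

        from datetime import date

        def decode_cyymmdd(value):
            # C is 0 for the 1900s, 1 for the 2000s; the rest is YYMMDD.
            c, rest = divmod(value, 1000000)
            yy, rest = divmod(rest, 10000)
            mm, dd = divmod(rest, 100)
            return date(1900 + 100 * c + yy, mm, dd)

        print(decode_cyymmdd(1170301))   # 2017-03-01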

    Mike

    276:

    Heh. Starting in 1999, I was taught COBOL, CICS & JCL so that I could help with the great Y2K problem.

    Of course, the joke was on my employer-- by the time I got up to speed, things had already been (mostly) fixed, and the market for COBOL programmers wasn't too robust for a while afterwards, to say the least. Layoffs followed.

    To my amusement, about four months ago (yes, in 2016) I got a call from a recruiter asking about the state of my COBOL knowledge. My first thought was: "You must really be scraping the barrel if you're calling me, who only had a few months of experience 16 years ago!"

    Of course, I'm probably one of the few people who has actual COBOL experience who is less than 50 years old.

    Daniel Duffy @182: I know that Charlie follows Oglaf, but for the rest of you, this bit brings the golem to mind. That cartoon is SFW, but large chunks of the Oglaf archive are very much NSFW.

    And for those who live with cats and travel a lot, this might amuse.

    277:

    Reminds me of the 2001 "Pigs Ear" beer-festival T-shirt, which included a "piglet" ( as in Pooh ) lifting up a Mumblebum Doom, sorry "Millennium Dome", with lots of "bugs" (i.e. multi-legged IC widgets ) running away from underneath, as well as many other Y2K details....

    278:

    You DO REALISE that the reason Norwegian Forest Cats can climb so well ... is that their "thumb" claws are almost opposed - they are certainly further "Out & round" than is usual ... Now, if you then re-crossed them with either Birmans, Siamese or Burmese, you could be in real shit

    279:
    Cars will almost certainly be the first examples of this: self-driving cars don't actually need very much of their computational innards to be exposed to the internet at all - map updates and traffic reports at most, and even for those, sane design has the car trusting its sensors over the map. It is entirely possible - and perhaps the best design choice - to just not give self-driving cars any network capability whatsoever, and only update maps with physical dongles. Not USB standard.

    Pretty much all self-driving car scenarios posit that you'll be able to summon it from your phone and/or from a central dispatch office... Traffic reports (and temporary road closure info) are important for preventing traffic jams. Software updates. Network capability is just too useful to give up.

    Besides, air-gapping is nice in principle, but rarely seems to work in practice — witness the occasional reports of even militaries not having the discipline for it, or malware jumping the air-gap on engineers' thumb drives...

    280:

    Money is decimal

    Hold it right there!

    American money is denominated in base-10 with two decimal places, but this isn't some kind of law of nature.

    British currency, prior to 1971, was: pounds/shillings/pence, 12 pennies per shilling, 20 shillings per pound, for 240 pennies per pound. But smaller coinage was used, down to the quarter-penny, and with happy fun coins like the sixpence and thruppence (three pence) in use quite late in the day. Indeed, IIRC British mainframes of the 1950s and 1960s had financial ALUs designed specifically to operate on British currency. More old British money geekery here.

    Today most currencies are decimal based, but there are exceptions and it's not a law of nature. If you want to write a global currency trading system you are going to get a nasty shock when you hit Mauritania, Malta, and Madagascar if you assume base-10 is universal, or if you try to apply it to historical currency figures (especially before the 19th century — Napoleon standardized a lot of things, including decimal currencies throughout Europe and colonial dominions).
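
    (Added illustration, not from the original post: a tiny Python sketch of pre-1971 British money arithmetic, assuming the 12d = 1s, 20s = £1 scheme described above.)

        def to_lsd(total_pence):
            # 240 old pence to the pound, 12 to the shilling.
            pounds, rest = divmod(total_pence, 240)
            shillings, pence = divmod(rest, 12)
            return pounds, shillings, pence

        print(to_lsd(1000))   # (4, 3, 4): four pounds, three shillings and fourpence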

    281:
    The other limitations (membership of only one "family" at a time / time delays when switching between families) [...] So; just invite everyone into a big family.

    Simple, common scenario: a child with shared custody between two parents, who divorced precisely because the "one family" thing wasn't working for them...?

    282:

    I accept that part of that was because the X Windowing System was written by unsupervised MIT graduate students

    Oh God (and by God, I mean Cthulhu), you're giving me flashbacks to a course I took in 1992, on application development for Motif/Xlib on SCO OpenDesktop. Using SCO's bastardized version of MS C. And of course this predated syntax colourising editors on that platform, never mind visual UI design tools that could spit out an app framework.

    The horror, the horror ...

    (There is a reason why SCO turned to Tcl/Tk when they wanted to write a graphical system administration UI for OpenServer. Then cocked it up completely with a spurious requirement to deliver the UI via serial terminals, so chucked Tk out of the window and rolled their own widget server that could draw Motif widgets and a CURSES-based equivalent.)

    ((And now you owe me a pint for dredging up those memories.))

    283:

    So can you explain why when I send TCP packets from Sydney to London the larger packets often arrive sooner than the smaller ones? The threshold changes but is generally 100-150 bytes.

    This is just a random, seagull's-eye guess, but ... what are the chances that someone at the ISP policy level told whoever is in charge of their routers to prioritize video traffic? On the assumption that it generates far more howls of rage from the end users when a video frame is delivered late (you get freezes and jitters that interrupt the Superbowl sudden death penalty time or whatever) than if an ICMP packet or an HTTP error response is delayed?

    284:

    Yes, precisely. But, by then, it should have been X11.4, which was just about usable once you discovered the bits that actually worked. The design was no less broken, and already increasing in complication, of course ....

    I could explain the background to the GUI/text hybrid lunacy, if anyone were interested, as I was peripherally involved. It was a demented decision to do with IBM's (mainframe) management and the purpose for which they originally intended to use the IBM PC - Presentation Manager, CUI and all that.

    I accept the pint duty - even after 30 years, I need something to take away the taste myself!

    285:

    Yes and no. I don't think that we are disagreeing, so much as judging differently.

    The problem was real, and very, very serious - in the 1970s and even 1980s. But, by the 1990s, most mainframe software had been fixed, kludged, or whatever - often by treating years 00-59 as 2000-2059 - and, as you say, most of the remaining bugs were dealt with in the last years of the century by programmers working overtime, and using a huge variety of revolting hacks.

    The panic WAS as whitroth said. I was managing a multi-million quid supercomputer and was taken to task by self-proclaimed experts for not doing a preliminary test. My response was that I had done an analysis, and estimated the time lost to do the test to be several times that of fixing any remaining bugs afterwards. As it was, I had to fix one bug (in one of my own scripts!) and rerun one month's accounting analysis. Piffling. That was an easy case, but there was just me, to do everything.

    286:

    That's a possibility. The other aspect, which is a LOT nastier than most people realise, is multi-path transport over a complex and dynamic network. When all that works, it helps with both RAS and performance. But the TCP/IP/UDP performance and recovery strategies weren't designed for such a complex transport system, which leads to some truly weird effects. I have found that the old, old way to trick batch schedulers into assigning interactive priority, by killing and restarting transfers, works at least some of the time with TCP transfers.

    As a related example, it is easy to see how echoes can occur in an electromechanical telephone network - but in an entirely digital one? Yet they do.

    287:

    dpb reported: "A doctor friend of mine keeps telling me that death is actually one of the harder things to diagnose. I find that a a bit concerning really."

    Your friend was exaggerating. Like disease, there are different types and degrees of death, some of which are easier to diagnose than others. Head missing? Dead, with the footnote that the head might still be alive in a jar somewhere. Vaporized by pulse cannon, leaving only a few grams of ash? Really most sincerely dead. Superhero and dead? Surely you jest. Abducted by Great Elder Ones? You wish you were dead. Lying immobile and not breathing? Hmmm... No pulse and not breathing (the two tend to go together)? Definite maybe. G

    WRT the discussion of programming and complexity, I wanted to clarify that I'm sufficiently familiar with complex systems to know that it's not necessarily possible to completely bug-proof such systems (cf emergent behavior). My gripe is with the failure to take obvious and necessary programming steps to minimize the risk of whole classes of known errors. For just a few examples of things that could have been easily programmed into the software ab initio to eliminate a whole class of errors:

    Statistical testing software used to let you run any test on any dataset, whether or not the dataset met the criteria (e.g., normally distributed data) for that test. I don't know whether this has been broadly fixed, but I suspect not from some things I've read.

    Pascal programming editors required you to type an end-of-line character (a semicolon, if memory serves), instead of adding this automatically each time you began a new line. Much fun was had proofreading code for missing semicolons. Back when dinosaurs and Fortran ruled the Earth, this was (to some extent) due to limitations on technology. Today, there's no excuse for such things.

    Memory overflow errors, most of which could be caught simply by validating all input before passing it on to processing. (As is done in pipelining inside chips -- my apologies if I've misused the technical term... not as up on the terminology as I used to be.) For example, if the software is expecting a 32-bit input and instead receives 64 bits, the software should either crash, throw up an exception and ask for the user to decide what to do, follow an algorithm to choose a solution, or truncate the input and let fate take its course -- i.e., crash or misbehave badly. Some of these solutions are easier and more useful than others. Simply crashing is arguably the simplest and safest option if you don't care about the user's data.
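
    (An added sketch, not the commenter's: one way to apply the 'validate before processing' rule in Python - refuse a field that isn't the size you expect instead of truncating it and hoping.)

        def read_u32_field(payload):
            # Expecting exactly 4 bytes; anything else is rejected up front
            # rather than being truncated or silently misinterpreted downstream.
            if len(payload) != 4:
                raise ValueError(f"expected 4 bytes, got {len(payload)}")
            return int.from_bytes(payload, "big")

        print(read_u32_field(b"\x00\x00\x01\x00"))   # 256
        # read_u32_field(b"\x00" * 8) raises ValueError instead of guessing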

    Outdated code that contains known bugs or vulnerabilities, like in a recent study of javascript (http://www.zdnet.com/article/an-insecure-mess-how-flawed-javascript-is-turning-web-into-a-hackers-playground/)? It's easy to build in an "on bootup, check version and notify the maintenance guy if an update is needed" line. That's not to say the code should automatically update itself; that can break so many interconnected things these days that the mind reels. And, of course, it's likely that nobody's been designated to fill the maintenance guy role or that you have Homer Simpson at the support desk. But at least the problem can be foregrounded so someone can decide how to act on it.

    As I noted previously, there are best practices that simply aren't being followed -- including listening to programmers and giving them time to fix things before demanding that they build fancy new features that only Marketing wants. We need to develop a programming and engineering culture in which these practices are ingrained in everyone's way of thinking -- and in which software/hardware engineers have the same responsibility as professional engineers to say "this is a bad idea, and I won't sign off on it"*.

    • Yes, I'm dreaming in technicolor. Sigh.
    288:

    "Statistical testing software ..." It helps only with simple cases, for two reasons: firstly, it generates only data within the imagination domain of the designer of the tests; and, secondly, many or most really nasty problems have a very low probability of occurring even when all conditions are met.

    But, as you say, there's no time to do it right, but always time to do it over :-(

    289:
    OpenServer. Then cocked it up completely with a spurious requirement to deliver the UI via serial terminals, so chucked Tk out of the window and rolled their own widget server that could draw Motif widgets and a CURSES-based equivalent.

    It was far worse than that, if the programmer wanted to be creative.

    One of my first jobs out of college was rewriting a piece of job security code, written and "maintained" by a programmer based in London over the phone (the company, and system, were based in Dublin, as was I). It was the said UnixWare, putting its widgets up on Windows (3.1?) client screens via a serial protocol. It was wrapped around a database on the UnixWare system. Written in a weird version of Basic.

    There was an API you could use, but the original programmer wanted security, so he wanted none of that. Instead of calling YesNoDialog(), he drew the widgets via the serial protocol, using Hex in print statements.

    Now the application itself was about 12,000 lines of code in about 300 files, with about 200 lines of comments all told - but mostly they were for suckers and had to be ignored. Each file contained subroutines with names like "3.4" and "4.5", and the filenames were also "3.4" and "4.5"; they nearly matched up, but didn't - another land mine to be stepped over. The reason for the numerics was that he used computed gosubs, you see.

    Careful investigation revealed that "3.4" corresponded to the code executed when you selected item 4 on menu 3 on the GUI. Mostly. Some numbers would refer to graphical elements in the menus like separators, so sub3.3 was free to be used by something important (mustn't waste the namespace, you see).

    This being Basic wrapped around a database, all variables were global. In fact nearly all variables were in an array in the database v(123). With numeric indices throughout, of course.

    In fact there was a special array english() in which all text strings were kept. It looked like, but of course wasn't, a way of making the application multilingual in the future.

    Now the observant will realise that with no function names, no text strings, and no meaningful variable names or filenames, it was possible to look at this code and have no clue whatever what it did, without serious disassembly by hand.

    We never did manage to fire the programmer who wrote it.

    290:

    We never did manage to fire the programmer who wrote it.

    Could that be one reason for writing it in such a convoluted manner?

    291:

    I sort of don't get the scare.

    As far as I am aware, these things need an Internet connection, which is not free; someone's got to pay for it, and someone has to set up the connection (entering usernames and passwords).

    And I am definitely not going to connect my fridge/printer/toothbrush/whatever to my home WiFi. Surely these things can't get connected to my WiFi themselves?

    Am I missing something?

    292:

    Off-topic. Charlie tweeted: About 30% of tweetstorms would Go Away if English had a verb for "to understand someone without agreeing/sympathizing with them".

    My thought is 'Got', as in "I got ya...", or "Oh, I got what you mean..."

    Meanwhile, my mother had a stroke two weeks ago*, but is recovering well, and now home. Only outward sign is some difficulty with the left hand, and her vision is more crap than it was. Hemianopsia—what's the opposite of FTW? Further meanwhile, the Sister-in-law's mother got back in the US with no problem, said it was the easiest time she's had. It was right before they announced that Iraq was being dropped from the Ban list.

    *the main reason I've been scarce here, that and not having anything to add to the recent conversations anyhow.

    293:

    "Am I missing something?"

    Several things, actually.

    "Surely these things can't get connected to my WiFi themselves?"

    Yes, they can and, yes, they do. And some of them will refuse to work unless you let them. I used to allow only known MAC addresses on, but it was tricky to do; I had to stop when I ran out of slots. I am planning to upgrade my router, and would like the ability to control which devices can talk to the outside world, and how - and preferably without having to buy a datacentre-level router (or build my own) and relearn how to use iptables or whatever. But I am not sure that is possible :-(

    294:

    At least SCO got what they deserved in the end (if you ignore the fact that they got bought by Caldera, etc.) I was a big deal at Groklaw under my real name for a couple years and did my bit to drag them down!

    I do know of at least one very large business (I've got one in my town and you have one in yours too) that still uses SCO Unix on a daily basis - what a horrorshow that is!

    295:
    Could that be one reason for writing it in such a convoluted manner?

    Absolutely. The code was originally written on contract and then the contract extended to fix bugs. I was explicitly hired to make it possible to fire him. My (new) boss sadly agreed we had to re-write the code.

    We had two copies of the code: one in production, one on our devel server. I didn't tell him I made a copy on our secret, new-fangled subversion server. He was dialling in at night (yes, pre-internet. Yes, we were paying international phone rates). Making edits to both copies. Different edits.

    296:

    I'll bet he had a "sane" copy somewhere, plus a script for turning the "sane" copy into something unreadable.

    297:

    With so much connectivity going on, who sets the priorities for communications using what criteria? And for how long does any one user have first-dibs?

    EC's comment that visual traffic might be given higher priority than text made me think: good, so as long as a large enough number of people are watching cat videos every hour of the day, the planet is safe from DT texting 'launch missiles' the next time he sees/hears something he doesn't like.

    Comments here make me wonder if independent unconnected parallel systems are the next step. More infrastructure but maybe safer overall esp. if new 'common good' systems (hospitals, traffic control, police, fire, etc.) are designed for security first. Hospitals because some parts of the world have started using telemedicine including performing surgery long-distance which relies on as close to real-time and uninterrupted communications as possible.

    OOC, do all programming languages round numbers the same way?
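
    (They don't, as it happens - an added Python illustration, not part of the comment: Python 3's built-in round() uses round-half-to-even, while the decimal module can be told to round half up the way accountants usually expect.)

        from decimal import Decimal, ROUND_HALF_UP

        print(round(2.5), round(3.5))    # 2 4  (round half to even)
        print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))   # 3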

    298:

    Pascal programming editors required you to type an end-of-line character (a semicolon, if memory serves)

    It's thirty years since I wrote any Pascal, so forgive me if my memory is failing....

    But ISTR a subtlety; in that C/C++ regards the semicolon as a way of terminating a statement, but that Pascal regards it as a way of separating statements within a scope (i.e. you should always have one semicolon fewer than you have statements).

    So in Pascal, it's important to remember not to put a semicolon after the last statement. Which causes fun when you add further lines to a function body...

    Meanwhile, Tcl. Just, no. The delights of using an interpreted language within production code? What a wonderful idea, nothing could possibly go wrong... (You can't tell I had to suffer it for several years, can you)

    299:
    And some of them will refuse to work unless you let them.

    What's the betting that the ones that are most stubborn about "needing" internet access are also the ones that least require it. Like mouse software that refuses to work without a net connection. It's even stupider than DRM, where you could at least say it's checking for a license.

    I don't know if there are yet people taking a pair of devices that should need no internet to function and carefully making all the messages hairpin through the vendor's servers, probably due to management demand that there must be internet features. Probably, though.

    Bonus points if the same management decides to start mining/selling the data in order to pay for those servers. I wouldn't even be surprised to find "desktop" apps that are not only a webapp running in a UI-less browser, but also completely dependent on the vendor's servers to function.

    300:

    We have a saying round here, "he who pays the piper calls the tune."

    Certainly multiple parallel independent systems would be more sensible in an ultimate safety way, but they won't happen due to increased cost and less ease of use in some ways.

    301:

    Last time I checked -- long time ago so it's not necessarily still the case -- long-haul was still done using ATM, which at 64 bytes per packet doesn't map well to IP, resulting in a series of decisions about how to embed and reassemble. And then it got done via hardware more and more, which changes the timing -- it's more cost-effective to have hardware reassemble and checksum large packets (because there's an overhead) than small packets.

    Traffic-shaping combined with encapsulation combined with limited bandwidth combined with hundreds of switches involved in the route make things complicated.

    302:

    So in Pascal, it's important to remember not to put a semicolon after the last statement.

    I mostly remember these days that putting a comma after the last item in Javascript arrays and object properties works for everyone except IE users...

    303:

    Meanwhile, Tcl. Just, no. The delights of using an interpreted language within production code? What a wonderful idea, nothing could possibly go wrong... (You can't tell I had to suffer it for several years, can you)

    While I haven't written that much TCL myself, I once worked for quite some time as a maintainer for a system which had two separate TCL interpreters. One was for the GUI, one for the command-line interface. They implemented almost the same functions, but not quite, and were subtly different. We didn't manage to get management approval for combining the two, partly because there were more pressing matters.

    My last job in that project was to teach the system to yet another group of developers. I think they were the fifth team to work on it.

    304:

    In SCO's defense (this was early-1990s SCO, not the ambulance-chasing patent litigation zombie who bought the name later in the decade), the widget server idea made a lot of sense from the perspective of their pre-existing installation base circa 1991, when the requirements were issued: it would allow the admin GUI to be fired up successfully on the console in event of X11 being unavailable, for example. But by picking Tcl and then abandoning the standard widget toolkit they turned their back on any hope of being able to leverage the open source Tcl/Tk community, and also forced themselves to reinvent the wheel, and a whole bunch of other mistakes. CURSES over a serial tty was very much an edge case by then, and by the mid-to-late 90s nobody installing a new system was going to be planning to admin it that way. Like the requirement to support diskless workstations mounting everything over NFS, it got tacked on in the early 90s and was obsolescent by the time the product shipped, three years late.

    (OpenServer was not the same thing as UnixWare, and this toolkit most certainly wasn't implemented in some variant of Basic. Hmm. Was this by any chance an Applixware thing? ISTR it had a basic-like macro language, and it ran on Linux and UNIX-on-Intel back in the mid-90s ...)

    305:

    Authenticating the source is already an issue. Wikipedia is vulnerable, as are all media sources, the latest fifteen years and more are full of examples.

    306:

    Back when I was there, SCO was a real UNIX company. Their problem was, every desktop-plus-GUI they sold forced them to fork out about $200-250 in royalties to IP owners. They'd started with Xenix (based on Version 7 UNIX) which Microsoft gave them in return for a 30% stake, then bought an unlimited SVR3.2 license back in the day and grew fat and happy on it. But they couldn't buy an SVR4 license for less than about one year's gross turnover, so they were locked into white-room-cloning a festering shitpile of stuff nobody really wanted, just to maintain feature-compatibility with early Solaris and various standards the pinstripe customers insisted upon.

    There was a window of time circa 1994-96 when SCO could have grabbed hold of the Linux kernel, plonked their own userspace on top of it, added the existing a.out binary compatibility shim that was floating around to allow SCO UNIX legacy binaries to run on Linux, and declared SCO LINUX to be the future. They'd have eaten Red Hat's lunch, and cut their licensing fees by over 90%.

    But they were just a little bit too corporate and buttoned-down by then, and the C-suite types didn't want to hear about this insurgent open source nonsense (that half their engineers were moonlighting working on). Hence the subsequent decline and fall.

    307:

    We're past the 300 comment mark, so it's SF/F PSA time: Hugo nominations will close eod March 17 2017 PDT.

    Charlie, how about a recap/list of your works that qualify for this year?

    And, there's a new category being trialled this year: Series.

    308:

    Well, yes, but I am pretty certain that they were simply following IBM, which had made the same mistake. I tried telling the relevant people in IBM that it was a mistake, but .... SCO's Tcl etc. mistakes were entirely their own, of course.

    Re 307: yes. There was also a brief window where AT&T could have done that, only more so, but they took the same path for the same reasons.

    309:

    My take on the family database problem:

    https://gist.github.com/dhasenan/ab20eb46df885186aad7d87b253d3ca6

    It encodes the relationships you talked about and supports most of what you asked for. Not exactly minimal, granted. Removing the possibility of sharing with dead people isn't terribly difficult (it's one more join away).

    The huge problem? I'm not going to tell Netflix who I'm married to, who my kids are, etc. At most, I'm going to make an account for each of us and tell Netflix that this account is my family member. Or I'll make a role for each of us within this one account.

    The other huge problem? I can create a shared account and hand out credentials to my friends and family.

    310:

    The executives had problems with understanding open source business models. (Remember, I left in '91 to go to Cygnus.)

    311:

    "Am I missing something?"

    Probably the somewhat poorly articulated and definitely weird assumptions running behind this discussion.

    "And I am definitely not going to connect my fridge/printer/toothbrush/whatever to my home WiFi. Surely these things can't get connected to my WiFi themselves?"

    Well, the printer is something of an odd one out in that list, as the chances are that that does have a wireless networking interface.

    But your fridge doesn't. To be sure, fridges that do have it do exist, but people seem to be making a bizarre leap from that fact to a conclusion that all fridges either have it already or soon will. For myself, I've never seen one, I've never seen one on sale (either in a shop or online), I'm not aware of any real people who own one, and I only even know they exist because of articles like this. The manufacturing costs of domestic goods are pared to the bone, and the manufacturers aren't going to spend the extra to build in a wireless network interface if people aren't going to buy it. Which they aren't, except in very small numbers, because who the hell wants one?

    And your toothbrush, of course, doesn't have one because it's just a chunk of plastic with hairs on one end and doesn't even use electricity.

    Having so much wireless kit that you run out of MAC addresses or need several racks of gear to accompany it is not normal. It is the result of spending a large amount of money on lots of pieces of equipment that only appeal to people of a rather specific subtype of the spoddy bent. It appears that such people also have an inclination to assume that their narrow minority interest is nearly universally shared, which it simply isn't. People whose spoddiness lies in other directions or who just lack spoddiness are not interested in this stuff and don't bother buying it because what's the point?

    312:

    Works of mine that qualify for the Hugos this year:

  • "The Nightmare Stacks" (best novel)

  • The Laundry Files as a whole qualifies for the experimental new Hugo for best book series. But so do a shitload of other fine series works, most of which will be nominated, causing something of a stampede. The Laundry Files will still be in the running next year, assuming the best series Hugo becomes a permanent fixture, and I'm in no hurry. Besides, I think it'll have a better track record next year when "The Delirium Brief" is visibly part of the continuity.

  • This blog probably qualifies for the shortlist under best fanzine, but who cares?

  • Frankly, I'd be very surprised to see my name on the Hugo shortlists this year. Next year ... maybe, there'll be two novels out by then ("Delirium Brief" and "Empire Games"). But 2019 will be the big year. Worldcon in Dublin (home mover advantage for those of us from the UK and Ireland), I'll hopefully have some short fiction out (including a Laundry novella), the Empire Games trilogy will all be in print and therefore visible and just maybe the Merchant Princes series as a whole (EG included) could make the series shortlist ... and y'all will have had a chance by then to read and decide whether or not to nominate "Ghost Engine", i.e. the first wholly original new thing I've written in a decade.

    313:

    Dang, I don't remember you being there! On the other hand, I was in Watford and you'd have been in Santa Cruz ...

    314:

    "I got ya" still implies approval to me. Perhaps it is time to invent a new horrible internet acronym. UBDA, perhaps, which is nicely pronounceable, standing for "Understand but don't agree".

    315:

    Just reading the URL was enough for me to know what it was about: those "religious" institutions in Ireland which differ from the Nazis principally in that they operate on a smaller scale.

    What it makes me wonder is how they can claim to be devout adherents of a religion when the Holy Book's records of his words and deeds make it quite clear that the founder of the religion would have found their practices just as abhorrent as any other normal person does.

    316:

    "OOC, do all programming languages round numbers the same way?"

    No. Nor do all compilers, all hardware, all compilers for the same language on the same hardware, or even different options for the same compiler for the same language on the same hardware.
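
    To make that concrete, here is a minimal Python sketch (Python standing in for "pick any two languages"; the values are chosen purely for illustration):

        from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

        print(round(2.5))   # 2 -- Python's built-in round() uses round-half-to-even
        print(round(3.5))   # 4

        # The same value under two explicit decimal rounding modes:
        print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))    # 3
        print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # 2

        # And binary representation adds its own twist: 2.675 is actually stored
        # as 2.67499999..., so rounding to two places gives 2.67, not 2.68.
        print(round(2.675, 2))  # 2.67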

    317:

    "Having so much wireless kit that you run out of MAC addresses ..."

    My router has a limit of 5 - and that's 'registered' ones, not just at any one time. And its wireless bandwidth is about 500 Kbits/sec. You might like to rethink your assumptions :-)

    318:

    I will happily admit that I had assumed all routers would be similar to the ones I've had - ie. give you a big enough list that it goes off the screen, even though I've only ever selected them on the basis of being cheap (or free), and being able to put them in bridge mode and turn everything else (including the wireless) off (hopefully). :)

    319:

    I'm familiar with SCO's history. You couldn't be a big wheel at Groklaw without knowing this stuff, plus I did a big email interview with Tigran Aivazian about the work he did on the Linux kernel, which was, BTW, approved by his manager, who was named Wendy if memory serves. You can probably find the research I did at Groklaw about his contributions easily enough if you're so inclined.

    But it's amazing how many companies died because management Didn't Get It.

    320:

    Did you ever use any KY2K Jelly?

    321:
    Things that can detect some types of bad, or at least dubious, code do exist, though they aren't perfect (and can't be). However, this requires willingness to spend the time and money setting one up, learning to use it, and actually fix the bugs uncovered.

    That last one tends to be the hardest...

    Every now and then there's a news story where someone managed to get a hold of the software for a voting machine / breathalyser / whatever, run it through a linter and now they've got an extra bullet-point for their report. In extreme cases, that's their only bullet point.

    Having large numbers of lint warnings isn't a problem per se, but it does suggest that the authors of the program do not follow best practice, which presumptively includes using a linter...

    322:

    About 30% of tweetstorms would Go Away if English had a verb for "to understand someone without agreeing/sympathizing with them"

    An acquaintance often uses the phrase "I hear what you are saying" in a way that implies "I understand your point, however I believe it is utterly wrong". Abbreviate it to IHWYAS, or maybe just HWYAS, and it has the added advantage of sounding like "Why, ass?"

    323:

    Holy fuck y'all nimbies.

    andwyrding (strong feminine noun): a consent, an agreement, a conspiring, conspiracy

    ánnes (strong feminine noun): oneness, unity, agreement, covenant, solitude

    ánrædnes (strong feminine noun): unanimity, agreement, constancy, firmness, diligence

    cwide (strong masculine noun, ge~): speech, saying, word, sentence, phrase, proverb, argument, proposal, discourse, homily, opinion, testament, will, enactment, agreement, decree, decision, judgment

    forespæc (strong feminine noun): advocacy, defense, excuse, agreement, arrangement, preamble, preface, prologue

    forespræc (strong feminine noun): advocacy, defense, excuse, agreement, arrangement, preamble, preface, prologue

    foreweard (strong feminine noun): condition, bargain, agreement, treaty, assurance

    forewyrd (strong feminine noun): agreement, condition

    formál (strong feminine noun): negotiation, agreement, treaty

    friþgewrit (neuter noun, irregular ending): peace agreement

    gecwedræden (strong feminine noun): agreement, conspiracy

    gecwedrædnes (strong feminine noun): agreement, covenant

    gecwide (strong neuter noun, -u): speech, saying, word, sentence, phrase, proverb, argument, proposal, discourse, homily, opinion, testament, will, enactment, agreement, decree, decision, judgment

    And... 300+ more.

    http://www.oldenglishtranslator.co.uk/

    Put in "Agreement".

    ~

    You fuckers are limited.

    324:

    And, yes: If you are able to parse why Anglo-Saxon has 300+ word variants for different conditionals of diplomacy and the modern world doesn't, well.

    Let's just say: Corporations Hate Non-Legalese Speak.

    ~

    Holy Fuck.

    It's like Greg: I thought y'all were ++++ mucho better sophisticated and all that.

    325:

    (intent of the user errors)

    I misread that as internet of the user errors. . .

    326:

    Oh dear. Someone forgot & opened the sluice-gate ... I note that the "message" (you should excuse the descriptor) both opens & closes with a generalised insult, viz: "Holy fuck y'all nimbies." ... "You fuckers are limited."

    328:

    Ooh, err, missus! The people marketing expensive last-minute 'solutions' may have done, while shafting the suckers.

    329:

    While it does you credit to keep a piece of kit running for that long, in particular from my perspective where the Queensland summer has a strong tendency to cook consumer network gear over time, I nonetheless humbly submit that MAC address filtering is only good for deterring the casual or uninformed. I recognise that WPA2 has only been around for 13 or so years and by your standards is probably still a flash in the pan, but I further suggest that an upgrade to something that supports it (not to mention a somewhat higher wireless bandwidth) could come in at essentially pocket change these days. While no-one knows your own circumstances better than you do, this would be a recommended path if you live in a populous area or near a main road.

    330:

    This is off-thread, but not off-topic. The UK government has been increasingly providing basic services online only which, as has been repeatedly pointed out, disadvantages those with no fixed address, no money, or no education. The worst I saw was to do with the "jobseeker's allowance", but the concerns are getting wider, and access to the Internet is now being used to control the franchise. This is firewalled, but I can't find it on their open Web page:

    "Young ‘locked out’ of ballot box while millions spent to boost vote of rich expats"

    "Now The Independent has learnt that the Association of Electoral Administrators (AEA) will protest to the Cabinet Office about the unfair hurdles faced by those without a permanent address in order to register to vote."

    http://edition.independent.co.uk/editions/uk.co.independent.issue.110317/data/7624006/index.html

    331:

    You fuckers are limited.

    No, we're just fluent in Elizabethan Newspeak.

    Nation States built on the post-Westphalian model don't need to worry about the endless shifting sands of tribal loyalty, unity, disunity, feuds, and reconciliations the way our ancestors did: when your claim to sovereignty is underpinned by a standing army with guns, you get to say "my way or the highway" and define the terms of discourse.

    (Who was it who said that a nation is a language with an army?)

    332:

    You've done a Pigeon :-) It does support WPA2, I use it, and more. The reason I want MAC address filtering is NOT to protect myself from expert-level hacks, but (a) to block outside packets targeted at devices that shouldn't be receiving them and (b) to stop the SUPPLIED software on those devices snooping on me and worse. In particular, I want to block the ability of (say) a printer/Ereader/toaster from automatically 'upgrading' itself, with all of the hassles that implies (including no longer being usable without upgrading my other kit, spending money on my behalf, and so on).

    The whole point about this thread is that those of us who are paranoid are right to be - they ARE out to get us. Not in the black helicopter sense, but in the senses of maximising the marketing information they can get out of us, and the money that they can 'encourage' us to spend (with or without our intent). And that doesn't even start on what the government intends to do, insofar as 'intent' and 'government' belong in the same sentence :-(

    333:

    That sounds like Max Weinreich's quote: "a language is a dialect with an army and a navy".

    Apparently he said it first in Yiddish: אַ שפּראַך איז אַ דיאַלעקט מיט אַן אַרמיי און פֿלאָט

    334:

    Who was it who said that a nation is a language with an army?

    Don't remember*, but now wondering what you call it when you have an army/police that speak a different language than the rest of the populace? Thinking of Tiananmen Square.

    *Okay, so I had to look it up. Yiddishist Max Weinreich said "A Language is a Dialect with an Army and a Navy". I knew it was familiar.

    335:

    All that's interesting*, but I read Charlie's tweet as wanting a single word. I realize my suggestion perhaps works better spoken, rather than written, tone of voice conveying the intent. I don't think acronyms really do it either.

    Now that I'm thinking of the Weinreichs, here's Uriel's The Seven Genders of Yiddish. Admittedly I haven't gotten around to reading it, and assume it has to do with grammar rather than recognition of gender definitions (or something like that—need to wake up some more).

    And that reminds me of Terms for Gender Diversity in Classical Jewish Texts

    *as in I'll have to bookmark that for later.

    336:

    Interesting ... so humanity's quest for the stars could be done in by components that don't handle numbers the same way. Given the length of such a mission, piddly rounding errors would only grow larger over time. Ooops - wrong star!

    More likely is that this rounding difference can be/probably is used to tweak P&Ls: round up for client-facing charges and round down for supplier costs. Considering the total number of transactions per day per corp, this could add up to a tidy sum in one person's pocket. The electronic equivalent of resting one's thumb on the scale, or owning the saloon floor sweeping franchise in California during its gold rush days.

    Wonder how this adds up on the stock exchanges. IBM designed at least one stock exchange and they're currently working with the Japanese to test blockchain. I assume IBM's in-house rounding method might be used even though this blockchain project is supposed to be built on Linux. Hope someone outside IBM and the Japanese stock exchange looks at the results very very closely, preferably side-by-side a hand-written ledger. Also ... if blockchain comes into effect, doesn't using non-standardized rounding risk run-away self-compounding errors? [I do not understand blockchain so my example scenario may be way off. And I'm assuming that there will be only one version of blockchain in worldwide use. Two blockchains using different rounding rules would allow earlier error detection.]

    337:

    Yes, all of that is true. The first aspect is the domain of numerical analysis, which is sadly neglected nowadays, often in favour of the bogus dogma that repeatable results are more reliable (or even accurate!)

    On the second, there is an apocryphal story about a bank that rounded down, and kept the excess - and an average of 0.5p per transaction rapidly adds up. So the person who programmed that code changed it to divert that into his account. When he was discovered, he was not prosecuted, because the bank would have to have admitted in court that it was committing the same offence he was claimed to have committed!

    The reason that most financial laws and procedures specify precise rounding rules is precisely because of such abuses. And, because they are written by different sets of bureaucrats over a long period, the rules are often bizarre and very different. Which is why the claim that IEEE 754 decimal helps is nonsense. For those who are interested, and have a clue, the following is a much-simplified description of the problem.

    Simple operations (addition, subtraction, comparison, SOMETIMES multiplication) can be done perfectly well using any arithmetic (including binary floating-point). The problems occur mainly with rounding to a lower precision (including after multiplication), division, remainder and exponentiation (think compound interest etc.) If the required rounding rule is one that IEEE 754 decimal provides, fine. If not, it has to be emulated. Also, many laws have rules that the excess (after rounding) is added to the next value or accumulated separately.

    But the killer is that ANY floating-point necessarily rounds when it runs out of precision, so your code logically has to deduce what the true value should have been from an imprecise result and the original operands, in order to round it correctly. That's a LOT trickier than most people realise, the relevant skills are not taught anywhere, and there are damn few people left with any experience whatsoever. There never were many.

    Something that is relevant here is that there is a lot of scope for 'apps' to be creative with their rounding, and it is devilishly hard either to prove fraudulent intent or for an ordinary joe to use the civil law in the UK, on the grounds that we don't have class actions as such.
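
    For the "excess is accumulated separately" pattern, here is a minimal Python sketch using the decimal module; the rate, balances and the round-down rule are invented for illustration, not taken from any real statute:

        from decimal import Decimal, ROUND_DOWN

        PENNY = Decimal("0.01")
        RATE = Decimal("0.0137")                    # invented rate, purely illustrative

        balances = [Decimal("1234.56"), Decimal("78.90"), Decimal("10000.01")]
        carry = Decimal("0")                        # sub-penny excess carried forward

        for balance in balances:
            exact = balance * RATE + carry          # exact decimal arithmetic, no binary fuzz
            posted = exact.quantize(PENNY, rounding=ROUND_DOWN)   # round down to whole pence
            carry = exact - posted                  # keep the shaved fraction rather than pocketing it
            print(balance, posted, carry)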

    338:

    That looked interesting, so I glanced through it. "The Seven Genders of Yiddish" uses 'gender' in its strictly linguistic sense. I am not a linguist, but am not convinced by his argument that the extra declensions should be treated as gender (rather than number). Modern English is increasingly using the plural "they" to mean a gender-neutral singular, as the result of political correctness, and some of his forms are similar to that. At this point, one can only quote Wilde: "The truth is rarely pure and never simple" :-)

    340:

    She's not going to convince anyone of anything while she's insulting us (and you should still be ignoring her if she annoys you; don't feed... etc.)

    "Greg Tingey, how about if I hold the football and you come running up and kick it?"

    341:

    The word "gender" derives from Latin "genus", kind, and in grammar need not refer to sex. There are a number of languages with more than three grammatical genders.

    This use of the word actually has priority over the "classification by sex" use, which the Oxford English Dictionary notes as "Originally extended from the grammatical use at sense 1", so Weinreich is entitled to use it as he does. But the modern sense is so pervasive nowadays that linguists have taken to calling the grammatical concept "noun classes" instead.

    342:

    Oh, yes, but that wasn't my point - which he actually makes in his paper, and is also implied by that Wikipedia link! It was whether the gender/number distinction makes sense in all contexts. When I was at school, I was taught that collective nouns are singular (in English) but, in common use, they are very often plural. And please note that "not convinced" means precisely that; I found his arguments plausible, but (as he implied) not the only ones that could be made.

    343:

    Thank you for that. I knew it wasn't about gender in a physical sense. This sort of thing is part of why I never got too far in learning Hebrew & Yiddish. Hebrew has some added confusion where you can have a 'male' noun with a 'female' plural form and vice versa, and the various conjugations. Started it all too late, High School French was okay, but it wasn't a language I wanted to learn. Anyhow, kinda sorry I brought it up. Then again, I think it's a useful way of thinking while doing some worldbuilding, if that makes sense. Still waking up, or trying to stay awake now, with extra migrainey goodness.

    344:

    ... a bank that rounded down, and kept the excess - and an average of 0.5p per transaction rapidly adds up. So the person who programmed that code changed it to divert that into his account.

    This featured in Superman III in 1983, it wouldn't surprise me if the idea was already well known by then. As for voyages to the stars, pretty much any control problem has random noise disturbances and measurement limitations which are much more significant than the numerical accuracy that calculations can run at.

    345:

    When I was at school, I was taught that collective nouns are singular (in English) but, in common use, they are very often plural.

    That's actually one of the areas in which American English differs from British English: the Americans still apply the collective-nouns-are-singular rule, whereas we have more or less given it up, at least in everyday usage. Weinreich seems to me to be being fairly careful in distinguishing number from gender—see his remarks on the difference between mass gender, which is a singular (cheese is a dairy product), and count genders, which are plural (Edam cheeses are typically cylindrical in shape, with a red wax coating).

    He is cheating in counting seven, though: what he really means is "most Yiddish dialects, including the standard one, have three genders, but I have studied an obscure dialect that has four, which do not map one-to-one on to the standard three." As far as I can see, there is no case where more than four would actually be in use.

    346:

    I've never tried either Hebrew or Yiddish, but I do have a working knowledge of German, which is a three-gender system, and sometime back in the distant past I acquired a 'A' in O-grade Latin, which is a three-gender system but has five noun declensions (which don't map one-to-one on to the genders).

    An interesting feature of German is that diminutive suffixes are always neuter, so words like Fräulein and Mädchen are neuter, despite referring to (young) women.

    347:

    It was. I heard it in the 1970s.

    348:

    Re: 'Rounding incompatibilities and errors'

    Resulting in this summary:

  • new tech that knowingly runs old mistakes

  • miniaturization in tech is always the better alternative, but small pre-existing mistakes in how tech runs don't matter

    And we're supposed to trust these same folk with IoT that connects everything in our environments (runs our lives) and ensuring that it runs correctly? Don't think so.

    349:

    Why not to use binary floats for decimal money.

    Accountants expect software to give the same answers as their trusty HP-12C, which uses BCD for calculation.

    Many programming languages have a Currency or Decimal data type which is used to represent money or other values where binary float rounding errors are unacceptable. Most older CPU families have hardware support for BCD for accounting (IBM from classic mainframes through to modern POWER, Intel x86, Motorola M68000) but modern ones like ARM do not since software based decimal math is fast enough.
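
    A quick Python illustration of the point (the standard decimal module standing in for the Currency/BCD types mentioned above):

        from decimal import Decimal

        print(0.1 + 0.2)                        # 0.30000000000000004
        print(sum([0.1] * 100))                 # not exactly 10.0 -- a penny-fraction adrift
        print(sum([Decimal("0.10")] * 100))     # 10.00 exactly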

    350:

    I do have a working knowledge of German, which is a three-gender system

    German gender did confuse me a bit in the beginning, but I did manage to get by it. My native language is Finnish which has no grammatical gender and no gendered pronouns. German was my first foreign language, and it has helped a lot with the Germanic and Romanic languages.

    Nowadays I don't much think about the 'real' gender of objects in different languages. They vary across languages, too, so for me it's just easier to learn them by heart and not think about them. What's more, in Swedish, it wasn't clear to me at all that the different grammatical genders were originally "male" and "female" as it's not at all clear to the modern person who learns the language as a foreign one. Well, wasn't clear to me, at least.

    I'm also following the discussion about the gender-neutral third person pronouns with slight amusement, as we have only gender-neutral pronouns. Finnish-speaking people however often make mistakes with third person pronouns which natives very rarely or never make. A couple of months ago I talked (in English) with a Hungarian co-worker and he made that kind of mistake too, and this prompted a long discussion about pronouns and languages in general.

    Still, we do have two third-person singular pronouns: "hän" and "se". In formal language they'd be "he or she" and "it", respectively, but in spoken Finnish most people use just "se" outside of formal speech. "Hän" is more often used about pets than people, at least in my experience. There is of course some backlash about this, mostly that calling people "it" is derogatory, but this is a very minor issue.

    Lately, I've been brushing up on my Japanese, and I kind of like that it has very different grammatical structures than other languages I can speak.

    351:

    Sounds a little tangled but I think I see your intention. I'd suggest to turn off bridge only mode and disable proxy ARP. Use DHCP lease reservations in lieu of your MAC filtering. Then use the aforementioned good old iptables to implement your heart's desire. A router that can support OpenWRT or its ilk might be indicated?

    352:

    It is important to note that the solutions do NOT use floating-point, because the requirement is for fixed-point decimal, which is subtly different.

    353:

    Yebbut, it is 10-15 years since I did that, el cheapo routers (like mine) don't have the functionality, and just what proportion of the population has the skills, anyway? The point is that this is a basic requirement to prevent your 'things' from using the Internet behind your back, or being hacked in from it, but only OCD geeks can achieve the result (or those who employ OCD geeks).

    354:

    Yes, I know, but .....

    355:

    Someone needs to write an SF book series set in a high-tech future whose technology has effectively turned the world into a medieval fantasy.

    Read "Inhibitors" series by Alastair Reynolds. Especially "Chasm City". He pretty much nailed it.

    356:

    [Wish I could edit posts]

    "The Prefect" is even better than "Chasm City" in terms of "world turned into a medieval fantasy". And the sequel to "The Prefect" is coming out this year, according to Reynolds' website.

    357:

    Indeed - "salami slicing" was the technique name, IIRC...

    359:

    But the TCP/IP/UDP performance and recovery strategy weren't designed for such a complex transport system, which leads to some truly weird effects.

    Nah, that guy who knows said it's not mysterious at all. You can't both be right, I think.

    I have a whole lot of suspicions about what is going on, and there are three or four different problems that I'm chasing. Buffering, (de)prioritising, multipathing are all real issues. You can have some real fun just varying the packet sizes used by traceroute especially if you have multiple servers in different locations that you can play with. But it's not a hugely high priority so I spend little bits of time on it when I can.

    I am looking at google's shiny "one database for the whole planet" toy because they have solved some of my problems (why bother sending stuff over the internet myself when I can just insert it here and read it there? {I kid, I think}) but I expect there are other problems (for example, complete freedom from confidentiality and security against state actors, and possibly others. The question of who else has access can't be answered yet).

    360:

    a basic requirement to prevent your 'things' from using the Internet behind your back, or being hacked in from it

    That sums it up for me.

    There are an annoying number of things that switch from whining about loss of connection to refusing to work. Luckily most things can be bought without needing a connection, but it's increasingly hard to find things that can't be connected - remember the kettle that had a wifi-capable processor in it?

    I do wonder about those who have secure-ish wifi but there's an insecure signal in their dwelling from a neighbour. The potential for confusion would seem to be high and also the potential for fun if you honeypot those things via an open wifi that's heavily monitored.

    361:

    I was being mean, the 'fuckers' (as ever) is at the species level.

    Anyhow, an answer: "I apprehend the alt-right’s concerns, but they can still fucking die in a fire".

    apprehend, verb (used with object):
    1. to take into custody; arrest by legal warrant or authority: The police apprehended the burglars.
    2. to grasp the meaning of; understand, especially intuitively; perceive.
    3. to expect with anxiety, suspicion, or fear; anticipate: apprehending violence.
    Verb (used without object):
    4. to understand.
    5. to be apprehensive, suspicious, or fearful; fear.

    So, no need for Anglo-Saxon: contains the prerequisite mix of Police (Authority disapproval), Understanding / perception and the under-tones of distaste / anxiety that said ideas stir in ethically engaged individuals.

    362:

    I do wonder about those who have secure-ish wifi but there's an insecure signal in their dwelling from a neighbour.

    I just did a quick scan and detected four wifi networks outside my home. Three of those had WPA2 and the unsecured one was a fon access point. ISPs have been delivering their hubs preconfigured with the password stickered on the outside of the box for years now so unsecured wifi networks are pretty rare.

    363:

    I got nine networks on my Kindle tablet but I live in a block of flats. We don't have a wireless connection in this flat unless I deliberately power up a router to provide one when I need it so that would be ten. Something with a better antenna would get me more networks from other sources around me and if I moved to the front room overlooking the street I'd get passing coach, bus and tram Wifi connections (a reverse drive-by) as well.

    364:

    ISPs have been delivering their hubs preconfigured with the password stickered on the outside

    IIRC ours gave us a plastic "swipe card" with the details and after some digging it turned out to be possible to change the wifi/WPA2 password but not the admin login for the modem. Allegedly because it's a cable modem, only the ISP can get access to connect as admin (I gave the phone to my partner at that point so I'm not sure what their excuse was).

    From my phone at home I can only see secured networks, but using a USB wifi adapter with a decent antenna on my laptop I can see a few unsecured APs and a whole bunch of secured-by-legislation ones (viz., their security can be cracked in seconds to days). Taking my laptop to a park on top of a hill gave me such a wide selection that I discovered a bug in the "show me the networks" program - after a hundred or so I wanted to stop the listing process but couldn't :)

    365:

    Also, these days you really need to be able to SMS the wifi password, but sadly that's not a service I've found. A lot of people (guests and new housemates) find typing in 16 random alphanumeric digits a hassle. But getting the password from laptop to phone turned out to be mildly annoying in many cases (apparently not everyone installs an FTP tool or even roots their phone).

    Oh, if you want to see real "wheels fall off with no internet" territory, use a permission-modifying tool to take permissions away from apps after you install them. Most won't install without silly permissions (contact managers that need access to the microphone etc), but you can change them after install. Most run just fine but every now and then one will get very, very cranky (a "free" photo editor that wouldn't run without frequent internet access, for example - hopefully just to download new ads but why try to find out).

    366:

    If Charlie is still looking for it (mentioned on twitter): "Change in Human Social Behavior in Response to a Common Vaccine (Flu Vaccine)" (2010). That's from scholar.google.com; not sure it's a legal link. (May eventually spend some time poking at the rest of the site.) Small sample (36 adults, 35 completed the first phase), and not seeing that it has been replicated, but interesting. From the results (summary, legal for sure):

    "RESULTS: Human social behavior does, indeed, change with exposure. Compared to the 48 hours pre-exposure, participants interacted with significantly more people, and in significantly larger groups, during the 48 hours immediately post-exposure. ... In the 2 days immediately after influenza immunization, study participants socially encountered almost twice as many other humans as they did in the 2 days before immunization. Participants were not consciously aware of any changes in their levels of sociability, nor could the changes in their social behavior be accounted for by differences in social patterns associated with particular days of the week. Human social behavior changed on the introduction of viral antigens."

    TIL of seed dispersal by ants, and elaiosomes. (Not a biologist, admittedly a poor excuse for ignorance). After the larvae have consumed the elaiosome, the ants take the seed to their waste disposal area, which is rich in nutrients from the ant frass and dead bodies, where the seeds germinate. And that some ant species in the area visited also overwinter, in their burrows, the caterpillars of some butterfly species. The caterpillars produce nectar. (Similar example)

    367:

    You might find by Kathleen McAuliffe interesting. Book-length discussion, with references.

    368:

    Second try…

    You might find This Is Your Brain On Parasites by Kathleen McAuliffe interesting. Book-length discussion, with references.

    369:

    Telstra has a free (as in beer) SMS gateway with a WS Interface that looks easy to use. I have been meaning to play with it but didn't have a use case. You could just send your message with SoapUI or a few lines of Perl, or whatever you have available.

    370:

    I shall have to look into that because it's something I would find handy from time to time. thanks.

    371:

    "But the TCP/IP/UDP performance and recovery strategy weren't designed for such a complex transport system, which leads to some truly weird effects.

    "Nah, that guy who knows said it's not mysterious at all. You can't both be right, I think.

    Actually my own experience suggests that unless a reader fully takes on-board the stuff about delivery and ordering of packets not being guaranteed then it's perfectly possible for both to be true. The mechanisms are simple and well described, in non-trivial cases (complex networks which involve multiple vendors, multiple routing possibilities, multiple bearers, and realistic traffic levels) the effects can indeed be surprising...
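
    A toy Python sketch of the "ordering is not guaranteed" point; everything here (the message, the eight-byte "packets") is invented for illustration:

        import random

        message = b"the quick brown fox jumps over the lazy dog"
        # chop the message into numbered "packets" of eight bytes each
        packets = [(seq, message[seq:seq + 8]) for seq in range(0, len(message), 8)]

        random.shuffle(packets)          # the network delivers them in whatever order it likes

        # the receiver only gets the original back because it reassembles by sequence number
        reassembled = b"".join(chunk for _, chunk in sorted(packets))
        assert reassembled == message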

    372:

    "Two things you didn't list that separate mediocre programmers from good ones is ability to handle pointers and concurrent operations."

    Which appears to assume that "good" programmers use something in the C family: at least, that's the only place I've ever come across the term "pointers" as a programming concept. Ada's subprogram interface rules may come close, since a procedure can declare that a parameter is mode "in", which makes it read-only within that procedure, and the parameter list for a function is always mode "in".

    373:

    "(Who was it who said that a nation is a language with an army?)"

    Which may explain things about the UK, since it means that Cornish, Doric, (UK) English, (Scots) Gaelic, Lallans, Scots and Welsh have to share an army?

    374:

    Which appears to assume that "good" programmers use something in the C family: at least that's the only place I've ever come across the term "pointers" as a programming concept.

    Then you missed out on Pascal at school? Or indirect addressing in assembler programming?

    Most of the very high level scripting languages (perl, python, ruby ...) also use references (type-safe pointers, AIUI) for building complex data structures like classes. It's kind of hard to implement a lot of algorithms without them, unless you insist on going fully functional (functional languages like ML or Haskell break my brain but appear to have a way around this).
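
    A tiny Python sketch of what those references buy you; the Node class is just an illustrative stand-in for any linked structure:

        class Node:
            def __init__(self, value, next_node=None):
                self.value = value
                self.next = next_node        # a reference to another Node, or None

        head = Node(1, Node(2, Node(3)))     # build the chain 1 -> 2 -> 3

        node = head
        while node is not None:              # walk it by following references
            print(node.value)
            node = node.next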

    375:

    I think the closures of functional languages are kinds of references. While it's possible to code in a way that has no side effects, a function that updates a variable whose scope only it and other calls to it have access to is manipulating a value through a reference, which is a side effect. I'd further suggest you can't actually implement curried functions without some concept of a reference (at least under the hood) - and those are really fundamental to Haskell.

    Of course everything from Perl to javascript use references to implement all the functional as well as OO features. A lambda function in Perl is a reference to an anonymous subroutine. All the callbacks we do in Node are pretty much the same thing. It's ubiquitous and central to understanding programming in all sorts of ways.

    On the other hand no-one outside of C really does pointer arithmetic, which could be what paws meant.
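
    A small Python sketch of both points, closures as references and hand-rolled currying; the names are invented for illustration:

        def make_counter():
            count = 0
            def bump():
                nonlocal count               # mutate the captured variable through the closure
                count += 1
                return count
            return bump

        tick = make_counter()
        print(tick(), tick(), tick())        # 1 2 3 -- shared state behind the reference

        def add(a):
            return lambda b: a + b           # each stage closes over the arguments captured so far

        print(add(2)(3))                     # 5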

    376:
    Interesting ... so humanity's quest for the stars could be done in by components that don't handle numbers the same way. Given the length of such a mission, piddly rounding errors would only grow larger over time. Ooops - wrong star!

    You don't need to go to the stars for that... people have already died when early versions of the Patriot missile did navigational calculations in single-precision float. One of the calculations was reportedly based on the time since power-up, so during tests it worked fine, but in the field it was powered up for four days straight, at which point the resolution was down to one-third of a second...
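
    The arithmetic behind that kind of failure is easy to reproduce. A Python toy, illustrative only (not the actual Patriot code): a clock kept in single precision and ticked in hundredths of a second stops advancing once the value is large enough that the tick falls below half a unit in the last place.

        import struct

        def as_float32(x):
            # round-trip a Python float through IEEE 754 single precision
            return struct.unpack("f", struct.pack("f", x))[0]

        t = as_float32(4 * 24 * 3600.0)      # roughly four days of uptime, in seconds
        print(as_float32(t + 0.01) == t)     # True -- a 10 ms tick no longer changes the clock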

    377:

    Potentially simple solution to that - mine is 2 UK phone numbers + a random-ish seed that's memorable for the family, which may offend security buffs but at 20+ digits it's good enough for me.

    Since WPA2 supports up to 63 chars you could chain a good number of phone numbers together - you just have to be careful that all wifi clients you have support copy/paste.

    378:

    On the other hand no-one outside of C really does pointer arithmetic, which could be what paws meant.

    Which gives us a definition of the Gentleman Programmer:

    "Someone who understands pointer arithmetic, but chooses not to employ it"

    (Riffing on upthread comments on the difference between having the coding knowledge of how to use certain programming constructs and having the software engineering wisdom to know when not to use them.)

    379:

    That is the point. What most people miss is that specifying and/or understanding a set of rules (interfaces, behaviours or other) is NOT enough to understand the system as a whole. And the problems I am referring to are at best highly indirect consequences, and at worst emergent ones. When I am referring to real expertise, it is the understanding of the latter (and not just the rules) I am referring to. There is a very good reason that the best designs start with a semi-mathematical model that can be shown to be both consistent and correct; without that, almost incomprehensible effects are the norm. As I said at the start, I know of at least one serious but very rare failure mode that is believed by (almost?) all self-proclaimed TCP/IP experts to be flatly impossible ....

    380:

    One of the objectives of C++ was to reduce the need for programmers to use pointers in open code, except for pointer-like uses (e.g. graph structures). This is most clearly visible in modern Fortran, which has pointers, but which are rarely used except for such uses. While Haskell DOES have alternative methods, they aren't much easier to use or clearer than the old one of using arrays, selectors and indices (as was common in Fortran 66/77).

    381:

    "I kept your bottles fresh when you were a baby.

    On that long, wonderful blancmange-jelly-and-cream afternoon of your fourth birthday party, the first you truly remember, I held your ice cream.

    After you grazed your knee falling off your bike, I was the one with the cold flannels to ease the pain of this sobbing seven-year old.

    Every golden summer day when you were twelve, your friends would come to the house for lemonade and cookies. I made sure there was always ice.

    And on the day your scholarship was announced, I was the one who chilled the champagne.

    I am Freddy, your fridge. Please friend your fridge on Facebook.

    Facebook Fings. Because friends are not always people."

    382:

    Not understanding pointers doesn't make you a mediocre programmer, because if you don't get pointers you haven't even reached an elementary stage of knowing how to code. Software engineers don't start becoming an asset instead of a liability until they have about ten years experience writing something approaching production-quality code (with today's low bar for production). Concurrency is a different matter, though: while there are ways to write reliable threaded code (and some languages make it easy), if you're trapped in C land and performance matters you're going to have to do grody things with mutexes. (Not being able to write reliable threaded code in Java -- or, worse, in Erlang, though I'm not sure that's really possible -- is an indicator of bigger problems.)

    Understanding time and space complexity is important even if you aren't seriously optimizing, because if you don't have any background for it, it's very easy to accidentally produce something that's absurdly slow even on small input simply by being naive as soon as you step outside idiomatic structures (or simply by using idiomatic structures without understanding them). For instance, Prolog's syntax for definite clause grammars looks like a great match for parsing, but because all predicate resolution in Prolog is a depth first search, a parser written naively as a DCG (even if organized to fail fast) will be unbearably slow compared to something uglier. Understanding time complexity is more important when you're seriously optimizing, but it's so easy to write something that will spend three weeks performing a simple operation on a megabyte of data that everybody should take the twenty minutes necessary to learn how to estimate it.

    Basically, the competence standards are so misaligned with what they should be to produce reasonably reliable code & services that anybody who complains about them comes off as a little bit crazy. This industry has been absolutely dominated by amateurs (some of them with degrees) for decades, and every few years it gets a fresh influx of them. But, we've had monumental shifts in quality in the past, and so there's no reason why we can't get to a much better state. We're slowed down by widespread assumptions that history and theory don't matter (meaning that problems that were definitively solved in the 60s persist because popular projects were written without the benefit of that knowledge), but the relatively small core of competent and seasoned developers (maybe 5-10%) that's already responsible for most of the good code also is the group that treats the lore seriously.
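
    To make the "accidentally slow" point above concrete, a small Python sketch; the dedupe functions are invented for illustration and do exactly the same job, but one of them hides a linear scan inside a linear loop:

        import time

        def dedupe_slow(items):
            seen = []
            out = []
            for x in items:
                if x not in seen:            # linear scan inside a linear loop: O(n^2)
                    seen.append(x)
                    out.append(x)
            return out

        def dedupe_fast(items):
            seen = set()
            out = []
            for x in items:
                if x not in seen:            # constant-time (on average) membership test
                    seen.add(x)
                    out.append(x)
            return out

        data = list(range(5000)) * 2         # 10,000 items, half of them duplicates
        for f in (dedupe_slow, dedupe_fast):
            start = time.perf_counter()
            f(data)
            print(f.__name__, f"{time.perf_counter() - start:.3f}s")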

    383:

    "Then you missed out on Pascal at school? Or indirect addressing in assembler programming?"

    No-one ever used the term "pointer" to me in Pascal, and I've never been asked to do assembler beyond attempting (and passing) the ICL aptitude tests.

    384:

    This is most clearly visible in modern Fortran, which has pointers, but which are rarely used except for such uses.

    This tickled a memory from around 1975. There was a circuit analysis program that ran on IBM mainframes that was free and passed around colleges. I was curious about adding some features so I looked at the code. It was written in Fortran. OK, I can figure this out.

    Oh, no. The analysis was done by moving huge blocks of machine code embedded into the Fortran via data statements as long arrays of floating point numbers. These chunks of machine code were then stuffed into massive arrays to simulate the defined circuit. Then the Fortran executed the arrays. We assumed for performance reasons.

    As someone once commented, they should have executed the developer.

    385:

    If I recall correctly, pointers were, at least in Turbo Pascal, the correct way of implementing dynamic data structures. It's been a while since I last used Pascal, though.

    386:

    Re: '... people have already died when early versions of the Patriot missile did navigational calculations in single-precision float.'

    Begs the question: Has anyone looked at what role (if any) programming has played in various 'friendly fire' deaths? Still quite a few wars going on, and it's getting more expensive to train a soldier. If I were a member of the victim's family I'd be furious if clumsy programming on 'our' side was the cause.

    387:

    "The chip maker reckons that by 2020 driverless cars will generate 4,000 GB, or 4 terabytes, of data a day that can be mined for information."

    http://www.bbc.co.uk/news/business-39253422

    388:

    "the 'competence' standards". Right. I have been a referee for someone getting BCS CEng under the old system, and I looked at the test for the new one. It was mostly about the USE of Microsoft-based (small) business systems, and there was very little (and even that at a very low level) on the sort of software engineering whose lack is being described in this thread, let alone about advanced but important topics like designing for correctness in a parallel/asynchronous environment. It may have improved, because I haven't looked at it since, but ....

    389:

    How much, if any, of that is actually required to be handled in a client-server manner, rather than peer-peer? I mean, I can see how/why it's an advantage to be told about a stationary or oncoming vehicle just round a blind bend (or over a blind crest, yes). I can even see how it can be an advantage for traffic on the A1 south of Scotch Corner to be told about stationary traffic on the M6 between Penrith and Carlisle, but why does northbound traffic on the M40 heading for the M42 West then M5+M6 North and M74 need to know about a crash at the Catthorpe Interchange?

    390:

    Pointers go back at least to UCSD Pascal and IIRC the original p-code version had some sort of pointer arrangement — not sure if it went all the way down to bare metal memory addresses, though.

    391:

    Re: Flu vaccine and human sociability

    Some questions I'd like answered:

    Why did participants get the flu shot: was it because they wanted to protect others (i.e. they wanted to engage in social behavior and didn't want to infect/harm anyone), or only because they didn't want to get sick themselves?

    Course schedule: recall that undergrad schedules sometimes had me running all over campus some days and sitting in my room or library (alone) other days.

    Control/placebo: because of ethical concerns, this wasn't part of this human study but imagine that it would be OK'd for a study done on rats/mice - results would be interesting.

    Why/when do people congregate: is it when they seek help because they're under the weather, or only after they start feeling better (immune system kicks up a notch which might also happen to release some feel-good chemicals).

    392:

    Dumb question time:

    What proportion of current tech engineering is the material/solid stuff one can see/touch and what proportion is programming? What are the failure rates of each? Materials science in engineering usually 'tests' to destruction. Is the same standard applied to the programming portion of the 'build'? (If possible, please use ordinary everyday examples - first, because it would make it easier to wrap my head around and, more importantly, it would provide a better sense of the scale of this issue in everyday life.)

    Nanotech - programmable variety - SF and science articles are saying we're heading in this direction. Who's writing the code for this, and what types of best practices are they embedding in it? This stuff is being discussed in terms of medicine, i.e., programming living tissue not just inanimate objects so consequences are that much greater.

    393:

    I don't think that you will find that is what was meant :-( An increasing proportion of Big Data is not used by the collector, but sold on, and used for purposes that are not directly related to the activity being logged. Marketing is only the most prominent one.

    394:

    Before that. I can't remember if the original Pascal had them, but they are certainly in the Revised Report of 1972.

    395:

    I also failed to point out that, unless the data is being gathered for a "vehicle usage tax", these data can (and should) be totally anonymised. I don't need to know who is just round that blind bend to take avoiding action, or who is stuck in that jam to re-route and either cross to the West coast on the A69 or take the A1 right up to Edinburgh.

    396:

    $CarDatareport (update) Am stuck in traffic .... @ $_Location.

    Yeah. Sorry, I really am not buying "Driverless Cars". The amazing optimism, without taking into account the cussedness & incompetence of people (never mind the equal incompetence of "Authority"), seriously worries me with this one. Usually I'm a techno-optimist, but not in this case.

    397:

    $CarDatareport (update) Am stuck in traffic .... @ $_Location.

    Have you ever used google maps with the traffic reporting turned on? (green/yellow/red) I assume it works the same way in the UK as it does in the US.

    398:

    I mean, on a "smart" phone.

    399:

    Correction.

    ATM cells contain 48 payload bytes, not 64.

    It was a political choice. One group wanted 32, one group wanted 64. They compromised on 48.

    400:

    It's illegal to use a mobile phone whilst driving; the fine is over US$300, and if you get caught doing it twice you lose your licence.

    401:

    I once inherited a piece of code written in Fortran. It had a single-element array A[1] declared as the first variable and not referenced anywhere else in the code. It turned out this was a leftover hack that allowed user-space code to access the operating system's data spaces by the use of negative offsets, e.g. writing A[-23] allowed the code to change the OS.
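
    (A hedged C sketch of the same trick, for anyone who hasn't met it: give yourself a pointer into the middle of a region and a negative subscript reaches memory "before" the array. On the unprotected machines of the era that could be operating-system data; here it's just another part of buf, so the example stays within defined behaviour. The names are invented, not from the original Fortran.)

        #include <stdio.h>

        int main(void) {
            int buf[64] = {0};
            int *a = &buf[40];            /* pretend this is the one-element array A  */

            buf[40 - 23] = 1234;          /* something "the OS" happens to keep there */
            printf("a[-23] = %d\n", a[-23]);   /* reached via a negative offset       */
            return 0;
        }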

    402:

    C, C++, and assembly language (all flavors) are the only programming languages I know of that permit the perversion known as "pointer arithmetic".

    Ada provides "access types", which have semantics essentially identical to pointers in every language except C and C++. As far as I know offhand, there is absolutely nothing in the Ada specification to require that access types be implemented with pointers (as opposed to array subscripts), but everyone implements them with pointers.

    Ada does not provide parameter-passing mechanisms as such. It provides parameter modes "in", "out", and "in out", and the compiler is responsible for choosing the parameter-passing mechanism. The key is that the compiler enforces "read only" semantics on "in" parameters, no matter how the parameter was passed. (A 1000-word structure is unlikely to be passed by value, even if it is declared "in".) For "out" parameters, a program that reads the parameter before it has been written is considered erroneous. I do not recall whether the compiler is expected to catch such things: it would be difficult to enforce. (The Ada specification warns about erroneous programs, telling the programmer in effect "If you do this, we will not be responsible for the resulting damage to your foot.")
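
    (Not Ada, but a rough C analogy of the three modes, with made-up names, may help readers who only know C: "in" behaves like a const pointer, "out" like a pointer the callee must fill, "in out" like a pointer the callee reads and updates. The const view also shows why a large "in" record need not be copied by value.)

        #include <stddef.h>

        typedef struct { double data[1000]; } Big;   /* hypothetical 1000-element record */

        /* "in": read-only view; the compiler rejects writes through it. */
        double checksum(const Big *in) {
            double sum = 0.0;
            for (size_t i = 0; i < 1000; i++) sum += in->data[i];
            return sum;
        }

        /* "out": the callee is expected to fill *out before returning. */
        void zero_fill(Big *out) {
            for (size_t i = 0; i < 1000; i++) out->data[i] = 0.0;
        }

        /* "in out": the callee reads and updates in place. */
        void scale(Big *io, double factor) {
            for (size_t i = 0; i < 1000; i++) io->data[i] *= factor;
        }

        int main(void) {
            static Big b;                 /* static: zero-initialised, off the stack */
            zero_fill(&b);
            scale(&b, 2.0);
            return (int)checksum(&b);     /* still 0: every element is zero          */
        }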

    The restriction that Ada functions only had "in" parameters was lifted in Ada95. A bunch of C programmers infiltrated the Ada95 committee and butchered the language. Jean Ichbiah resigned from the Ada95 committee over another of their butcher-knife actions; his resignation letter was blistering. (I received a copy of the letter, and I agree with him. I was on the committee as a reviewer, but discovered I did not have the necessary time to do the work, and stepped down.)

    403:

    Wirth's original definition of PASCAL (ca. 1970) contained a CLASS declaration, that essentially declared an array of objects of a particular type that could be accessed with pointers. Each class so declared created a pool of objects that could be allocated and deallocated, with access to the objects via the pointer syntax.

    PASCAL2, the language everyone knows as PASCAL, dropped the CLASS declaration but kept the rest. Instead of allocating from class-specific pools, everyone allocated from the global heap.

    404:

    Depends on where you are and how you do it.

    If you have the phone mounted and running a GPS app, then it's the same as using a GPS (at least in Ontario). (And if you are holding a GPS in your hand, you are just as liable for the fine/demerits.)

    I use WhatsApp on my iPad when I'm going to a new location. Not enough data in my plan for live traffic updates, but it's very handy not having to fiddle with maps — voice prompts for upcoming actions ("in 200 metres, turn left onto Broadway") and if I need to see the map a quick glance at the screen is faster than reading a map.

    405:

    Modula-2 put pointers back in explicitly. I programmed in both languages and avoided pointers like the plague generally despite using assembly language on a regular basis where pointers are part of the daily grind (and a lot less obvious to use than in an abstracted HLL with compiler oversight and hand-holding).

    Pascal was originally intended as a theoretical teaching language, not to be used in anger to run real programs. Modula-2 was supposed to be Pascal's blue-collar cousin, ready for work.

    406:

    The syntax used for pointer arithmetic is not the issue, and never has been; the problems all arise from allowing the semantics. It makes no difference whether you have to write 'p += 5;' or 'p => p[6:]'. Algol 68, Ada etc. add a little more obfuscation, but that is all it is.
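
    (A tiny C illustration of that point: the spellings below differ, the semantics do not, and it's the semantics that cause the trouble.)

        #include <assert.h>

        int main(void) {
            int buf[10] = {0};
            int *p = buf, *q = buf, *r = buf;

            p += 5;        /* arithmetic spelling   */
            q = &q[5];     /* subscript spelling    */
            r = r + 5;     /* yet another spelling  */

            assert(p == q && q == r && p == &buf[5]);
            return 0;
        }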

    407:

    CDC 170-state Cybers had really good FORTRAN compilers. Amazing optimization. Too much so, in fact: the compiler would note that 3.14159 was used in multiple places, and assign it a single memory location. And then a function could change the value, and wackiness would ensue.

    (Pascal was developed on the 170-state Cyber, incidentally.)

    408:

    Most project planners and programmers excluded Modula-2 well before the interview, for good reasons. Inter alia, Wirth made a total pig's ear of arrays in both Pascal and Modula-2 and then claimed that they were not useful, because they were rarely used.

    409:

    Yes - and the depressing / reassuring thing is that it's almost always a chain of human error, not a programming error. Typically, it's because the unit doing the shooting is convinced that they're shooting at the enemy, not friendly forces. Remember, there are humans in the loop during all lethal activities; calling a UAV a "killer robot" makes for good propaganda, but is fundamentally inaccurate.

    Sometimes, it's ill-discipline or incompetence (see the USS VINCENNES and Iran Air 655; or the two US Army helicopters shot down by USAF fighters over Northern Iraq in 1994). Sometimes, it's because of outdated information (a British Challenger 2 tank destroyed by another Challenger 2 in 2003) or conflicting boundaries (an SAS and an SBS patrol engaged one another during the Falklands War - IIRC the boundary between their patrol areas had been drawn in a thick pencil, gone through several generations of hand-copying, and errors had crept in).

    The Patriot incident mentioned above wasn't friendly-fire; an incoming SCUD wasn't intercepted, because the tracking and fire control radars had a subtly different idea of where the target was (where the antenna was pointing, and where the software thought it was pointing, were different). Unfortunately, the SCUD then hit a barracks.

    http://www.iraqwatch.org/government/US/Pentagon/dodscud.htm

    Having said that, very few SCUD were actually intercepted by the Patriot system - most apparently broke up because of inherent design flaws, rather than because they were shot down. So even if it had been working, it might not have saved those lives.

    410:

    We had a near-miss in 1990, but fortunately the MoD accepted that our radar signal processor would be written in C and assembler. Unfortunately for the radar data processor team, the MoD was still insisting on Ada. They had to switch from an Ericsson-driven flavour of PASCAL called EriPascal [pp.85] to Ada, thus rendering themselves somewhat CV-limited for their future careers...

    Anyway, the joke went something like this:

    • BASIC assumes that the programmer is a beginner
    • PASCAL assumes that the programmer is an undergraduate
    • C assumes that the programmer is an adult
    • ADA assumes that the programmer is a hardened criminal.

    411:

    This quote says it all:

    'When questioned by board investigators as to who was responsible for tracking the helicopters, Tracy said, "I cannot tell you that. I honestly don't know."[28] When Wang was asked the same question by the investigators, he replied, "No one is responsible."'

    Now expand to IoT ...

    412:

    Folks in the US northeast: consider recharging your communication devices' batteries (like right now!) becuz massive storm (Stella) is on its way. See Bombogenesis below.

    https://weather.com/storms/winter/video/winter-storm-stella-will-cover-the-northeast

    For UK folks, sailors: The barometric drop predicted for Stella looks almost as fast/hard as Fastnet (1979).

    413:

    Thanks for that; it's been niggling at me ever since Charlie mentioned it, trying to remember where Pascal had pointers, and wondering whether they really existed in actual Pascal Pascal, or whether it's just that they existed in one or more of the more popular things which call themselves "Pascal" but have had so many extra features bolted on to make them usable for real problems that the name is no longer all that meaningful. I'd completely forgotten that it allowed you to get memory off the heap, probably because once you start needing to do that Pascal fails for the problem in so many other ways that you've long moved on to something else.

    There was also a mechanism for passing parameters to functions in a manner that resembled, in terms of how you did it, passing a pointer to the parameter in C, only with the compiler wrapping up all references to that parameter so you didn't have to treat it any differently, except in its declaration in the parameter list. The Pascal "compiler" I used for the only program I've ever written in Pascal (a university exercise for which Pascal was mandated) worked by translating Pascal into C and then calling the C compiler, so that's probably what it was doing quite literally as well as in terms of how it appeared. That was all I could think of at first, and doubted whether it counted since not only was such a parameter wrapped to appear non-pointer-ish, there was AFAICR no way to unwrap it and make use of the underlying pointeryness.

    414:

    IIRC, BCPL's was a bit trickier, because your pointer arithmetic was always in words, not in the size of the objects pointed to.

    In that respect it matched assemblers a bit more closely.
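
    (A small C sketch of the contrast: C scales pointer arithmetic by the size of the pointed-to type, so to get the assembler-ish behaviour of stepping in raw address units you have to go through a byte pointer and do the scaling yourself.)

        #include <stdio.h>

        int main(void) {
            int v[4] = {10, 20, 30, 40};
            int  *ip = v;
            char *bp = (char *)v;

            /* ip + 1 advances by sizeof(int) bytes; bp + 1 advances by one byte. */
            printf("scaled:  *(ip + 1) = %d\n", *(ip + 1));
            printf("by hand: *(int *)(bp + sizeof(int)) = %d\n", *(int *)(bp + sizeof(int)));
            return 0;
        }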

    415:

    Pascal was originally intended as a theoretical teaching language, not to be used in anger to run real programs.

    Back in the 80s, I was writing telecommunications software in Pascal. A goodly chunk of the Canadian telephone system ran on software written in Pascal.

    416:

    Another problem I've found is that not all engineering fields put strong emphasis on teaching programming.

    I'll use myself as an example. I went to a top-5 university and majored in Aerospace Engineering. The programming required during my Undergraduate involved nothing much more rigorous than functional programming in Java and MATLAB. None of those classes even introduced the concept of pointers. The professors viewed teaching more in-depth programming as a distraction from teaching the main material. A friend got his Masters in theoretical mathematics with no coding knowledge.

    The response was one of the following:

  • Hire someone who had a CS degree, but did not understand the physical phenomenon they were modeling

  • Let the main researcher write the program, and damn all the headaches this would create for someone who has to read the software down the line. In this case, the person coding the software has no idea of best practices unless they are self-taught in the area.

    417:

    I did say "originally". People do the damnedst things -- I've seen a BASIC interpreter implemented as an Excel spreadsheet -- and Borland Turbo Pascal has a long and storied history in production code.

    I even wrote a robotics embedded control system in Pascal (well, the user interface stuff, logging etc., the real meat was done in assembler to speed up control loops). The HP64000 development system had a bunch of compilers for assorted microcontrollers and writing a machine path recorder/editor in Pascal was a lot easier and cleaner than doing it in assembler. The compiler had its limitations though and I usually went through the assembler it generated to clean stuff up by hand afterwards.

    I am of the "pointers are evil" camp myself -- assuming I fight my way through the migraines I can grasp the concept of arrays of pointers pointing to arrays of pointers for a while but when the strong drugs wear off that awareness goes away, like the snows of yesteryear never to return.

    418:

    Some questions I'd like answered:

    Why did participants get flu shot:...

    Exactly; these are the sorts of possibly-bad-study questions that should be in everyone's repertoire. The similar literature for Toxoplasmosis (recent paper picked at random) is much more extensive and seems to be a lot more solid, but there are plenty of skeptics.

    A few decades ago I had a book (perhaps a textbook) that was nothing but a collection of bad studies (mostly psychology IIRC); you were supposed to quickly sniff out why they were bad, or might be. It was unusually fun for a coursework book. Does anyone know the book? Have tried to find it since but have failed. Would be decades old.

    419:

    I experimented with "fiddling with maps" shortly after learning to drive (on a dead straight road, with wide verges, out in the country, with nothing else on it). I quickly found that there was no alternative but to stop. Trying to do it while moving meant I very quickly ended up on the verge, even if I first slowed down so much I might as well have stopped anyway. (That was with a paper map, of course; a screen would have been even worse as X is harder to read on a screen than on paper for any X.)

    The problem was the mode-switching time for my visual system; watching for anomalous changes in a broad and distant dynamic visual field is very different from analysing the relationships between elements in intricate static data - I suppose the difference is to do with the latter being more of a learned skill, whereas the former is a variant on a very basic and instinctive survival function that the brain is inherently good at. It also takes a noticeable amount of time for my accommodation muscles to switch focus from infinity to a couple of feet and back.

    Since I have no reason to believe that my own brain is hugely different from anyone else's when it comes to this sort of thing (although I do believe that most people either fail to realise how long the mode-switching time is because they lack the habit of "sitting outside their brain" and analysing its operations, or else just wish its significance away with some excuse like "but everyone does it"), I consider the trend of allowing people to use any kind of device with a screen while driving is misguided, and drawing a distinction between having it in some kind of clamp or not is splitting pointless hairs. AIUI there was a law (in the UK at least) forbidding the installation of a TV set in a car where the driver could see it, back when TV sets were the only things that had screens and the size of them would have made such an installation impractical anyway. I don't know what the current status of this is, but if the concern is with devices that encourage you to take your attention off the road - without any "however briefly" qualification, since the generally-ignored mode-switching time means it's never "brief" - then logically it should be extended, to cover any kind of device with a screen, whether it's a TV, a phone, a satnav, that barmy GUI menu thing BMWs have, or anything else.

    (It is also one reason I deplore automated speed limit enforcement, since this encourages people to pay far too much attention to a number on a display inside the car - a number which is in any case very poorly correlated with actual danger. If there is a need to prevent people driving at dangerous speeds, it should be met by use of actual police patrols who can take account of all the numerous factors that determine a safe maximum speed for any given set of conditions, not by oversimplifying the problem to the point that the baby gets thrown out with the bathwater just so a dumb machine can make a binary decision.)

    My method of navigation by road is to study the map before commencing the journey, to get a feel for the geography I'm dealing with, and refresh my knowledge during pee breaks, then correlating that knowledge with my general spatial awareness ("sense of direction"). I find this works well enough that I can do better than a passenger making point-to-point decisions from reading a map in real time, especially in bits where the scale of the map is too small to clearly show the necessary detail.

    420:

    C versus Ada: unfortunately, the programmers who used/use them tend to be exactly the other way round :-( The best description of C I heard was "a hacker's wet dream".

    To Bellinghman: you recall correctly, but it was only trickier if you needed to access objects that were not word-aligned. It didn't have the C-specific array/pointer lunacies, which are damn near indescribable and cause a lot of problems (even in C++).

    421:

    I see what you're trying to do, but I don't think it works. The difficulty is that simply expressing understanding tends to imply at least some degree of sympathy. I'd expect "apprehend" to be parsed simply as a synonym for "comprehend" without any implication of "reprehend". I consider that to convey the concept in a usefully robust way it is necessary to use some explicit construction like "understand but don't agree" - especially on twitter, where despite their value in mitigating the difficulties imposed by that bloody stupid 140-character limit, subtleties of language seem to go over people's heads even more than they do in internet postings in general.

    422:

    Understanding time and space complexity is important even if you aren't seriously optimizing,... it's so easy to write something that will spend three weeks performing a simple operation

    Shhhh. Some of us make good money going in to fix code like that. Remember, do it the easy way then pay a real code wizard a few thousand dollars to come in and optimise it. Us code wizards know mysterious incantations and arcane formulae that mere code warriors can never understand.

    I kid you not. I've gone in and replaced linear searches with binary ones to cut a data import from weeks to 15 minutes. The hard part of that job was looking busy for the other 9 days of my contract.
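
    (For the curious, a minimal C sketch of that kind of fix; the data and names are invented, not from any actual contract. Sort the table once with qsort(), then each lookup goes through bsearch() in O(log n) instead of an O(n) scan.)

        #include <stdio.h>
        #include <stdlib.h>

        /* Comparison function shared by qsort() and bsearch(). */
        static int cmp_long(const void *a, const void *b) {
            long x = *(const long *)a, y = *(const long *)b;
            return (x > y) - (x < y);
        }

        int main(void) {
            long keys[] = {42, 7, 19, 88, 3, 56};      /* stand-in for the import's lookup table */
            size_t n = sizeof keys / sizeof keys[0];

            qsort(keys, n, sizeof keys[0], cmp_long);  /* sort once up front                     */

            long wanted = 19;                          /* each lookup is now O(log n)            */
            long *hit = bsearch(&wanted, keys, n, sizeof keys[0], cmp_long);
            printf("%ld %s\n", wanted, hit ? "found" : "missing");
            return 0;
        }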

    423:

    My method of navigation by road is to study the map before commencing the journey, to get a feel for the geography I'm dealing with...

    I agree with both that and Robert Prior's prior #405. What we do before setting out is a fair amount of MAPEX(*) using Google Maps/Earth, Google Street View and possibly other map aids like Bing. That sets in mind what we should be seeing and doing on the way. But on the way, a dash-mounted GPS used with restraint and discipline as a talking heads-up display to provide cues as to what's coming up is really useful.

    (*)http://www.globalsecurity.org/military/ops/mapex.htm

    424:

    Been there - done that - and, amusingly, the converse :-)

    425:

    I find it handy, especially in areas with missing street signs and/or lots of semi-trailers (which block the view of street signs). Even more so in areas where there is no safe and/or legal place to stop — I've been in situations where not being able to change lanes has forced me into the wrong road and turning around has taken over an hour. (But then, Toronto traffic is the 6th worst in North America, 47th in the world.) And if one is forced to divert (say by the police closing a road) it updates the route and navigates you to your destination.

    The thing I like about WhatsApp is that rather than show the entire route, it just shows where I am and the next intersection or so, oriented the same way the car is pointing. Which means that glancing at it takes the same time as doing an instrument check. It's not quite a HUD, but pretty close.

    426:

    if the concern is with devices that encourage you to take your attention off the road - without any "however briefly" qualification, since the generally-ignored mode-switching time means it's never "brief" - then logically it should be extended, to cover any kind of device with a screen, whether it's a TV, a phone, a satnav, that barmy GUI menu thing BMWs have, or anything else.

    I disagree conditionally.

    I see where you're coming from, but I will contend that a properly set-up satnav, programmed with a destination before the vehicle is set in motion, might be a distraction, but is less of a distraction than a disoriented driver who has lost track of their directions and may erratically try to take exits at the last moment in hope of getting back on course.

    One problem is that satnavs tend to display too much information for some of us — I end up using mine on long journeys instead of the speedometer (it's more accurate) and end up scanning it instead of my regular instruments. (Which may be problematic if I ever get an engine check light or low fuel warning, but hey, those are low probability warnings.) The ideal would be some sort of permanent fixed head-up display to superimpose the basics (speed, direction, instructions for the road ahead) on the windscreen in front of the driver's forward view. But alas, those seem to be exotic luxuries for cars — something only really available on aircraft right now.

    One thing that does help the middle-aged driver: properly adjusted varifocal (US: progressive) lenses, so that your eyes are accommodated to the instrument console when you glance down, and are focussed correctly on the horizon when you glance up. Getting it right is tricky, but part of the cognitive lag of switching between instruments and windscreen view is down to your eyeballs' physically adjusting to a different depth of focus; varifocals have already done this for you, and make for a more relaxing driving experience.

    427:

    Pricey but covers the do's and don'ts quite well:

    Essentials of Behavioral Research: Methods and Data Analysis 3rd Edition by Robert Rosenthal, Ralph Rosnow

    428:

    Um, hem,... I used pointers in one of the languages I first worked in... PL/1.

    mark

    429:

    Re. "Smart" devices that won't run without internet access, thereby introducing great gaping security holes in one's home network.

    I foresee a not-too-distant day when some clever hacker or group of hackers creates a dongle that fakes out the smart devices into thinking they're connected to the internet so that they'll operate just fine, but without actually opening your home to casual home invasions. I expect this would download virus signatures (excuse me... "phone home codes") just like antimalware software, and operate on a subscription fee.

    Cue the inevitable arms race between the IOT people and the valiant souls who created the dongle to protect us from their machinations. Hmmm... maybe there's a story in there somewhere...

    430:

    Sean wrote: You really may want to try researching before scoffing. (To actually respond to your attempt at sarcasm:

    Hey, that was neither. It was humorous/minor egoboo/bragging, for use in a very small subset of folks who'd appreciate it.

    mark

    431:

    I heard about the guy who'd done it, and that the company did not publicize it, but they got him, legally. And that was in a COBOL class in 1978/80, in Philly.

    mark

    432:

    And one of the many horrible things about the original 68000 Mac with the built-in screen was that the official language for programming it was Pascal with a truckload of bits bolted on the side so you could still do useful things with it. Underneath it was all pointers to pointers, which they called "handles", because of the crappy half-arsed memory management system the thing had, the second level of pointers being roughly equivalent to page table entries on proper machines.

    The reputation it gained for "stability", which Apple have been trading on ever since, was actually totally spurious; in reality it was much easier to lock it up hard than an MS-DOS box. It got that reputation simply because there was hardly any software for it, and what there was had been vetted inside out by Apple. I suspect the use of Pascal was partly to make it easier for them to do that (although I don't think that was their official excuse). Of course it also helped them that the thing was so shaky that if a program was going to crash at all it didn't take much doing to find the conditions that would trigger it.

    433:

    so that your eyes are accommodated to the instrument console when you glance down, and are focussed correctly on the horizon when you glance up.

    IMO, the dangerous word there is "down." That takes the road entirely out of your field of vision, or pretty much so. If the GPS display is mounted on top of the dash(*) to the side of the steering wheel, glancing sideways at it leaves the road in the FOV, which is desirable. Not quite a true HUD, but closer to it than one mounted lower.

    Like this, and variations: http://13db.com/wp-content/uploads/2015/10/Choose-the-Correct-GPS-for-Your-Car.jpg

    (*) I'm not sure "dash" is the right UKian terminology.

    434:

    How about £20? That's for one of many such device/app combinations that use your phone as their display, but IIRC it's possible to hack one together with a CD case and window tint, at the lower end, and get dedicated, more integrated, projectors for hundreds of pounds/dollars/etc at the other.

    435:

    The 68k chips had a problem in that a lot of the support chips the CPU needed to work as a system were late to market and/or buggy on first release, including the all-important 68851 memory management unit. The usual fix was to patch a bunch of TTL onto the CPU bus to make things like memory accesses work and interface to Other People's support chips (for values of Other People == Intel).

    Without the MC68851 there was no easy-to-implement protection for memory accesses, and no systematic DMA implementation, which was a bear given the different possible data word sizes (8 bits, 16 bits and 32 bits) and the possibility/requirement to write into misaligned memory. It also didn't help that the memory/I-O bus was asynchronous, requiring a data-available strobe to qualify a read or write rather than simply being clocked as in Other People's CPUs. This played havoc with atomic writes for handles, flags and stack accesses on interrupts.

    Speaking of interrupts there was no Motorola interrupt controller available for a long time after the CPU hit the market and the venerable Intel 8259 was usually hacked into systems with some serious TTL bludgeoning to give enough interrupts and some level of control over how they were handled.

    Intel's 16-bit CPU offering, the 8086, was deliberately bus-compatible with all the old well-tested 8080-series bus devices. Moreover it was also compatible at the register level with the 8080, which made code conversion piss-easy, although purists decried the 64k memory addressing structures that compatibility required.

    436:

    I probably agree with your disagreement :) On the occasions when I do lose my bearings, one of the most awkward aspects of recovering is the need to do so in a safe manner, which is made harder by the inevitable annoyance at things having gone wrong. I can appreciate that a lot of people might well just start to flap and let such considerations go by the board.

    I consider myself fortunate to have a friend who is both thoroughly aware of the importance of those matters which are commonly lumped under the misnomer of "advanced driving", and able to communicate it. The human brain naturally tries to apply to driving a de-facto physics model which gives the right answers for unassisted locomotion, but not when dealing with the speed, mass and response characteristics of a vehicle. The behavioural modifications needed to work around this need to be learned, which requires both effort and guidance, but the legally-mandated level of driving instruction does not cover such matters except in the most rudimentary way; people are left to pick it up for themselves, but don't generally see the need.

    Totally agree about varifocals; I find it strongly noticeable how the purely physical aspect of the mode-switching delay has got worse as I get older and my accommodation ability gets worse.

    437:

    PASCAL had pointers from the beginning.

    I have seen it reported that, in 1982, Wirth stated explicitly that PASCAL had been designed for general-purpose programming. It is true that it was used heavily for teaching in the 1970s, it being far better for that purpose than anything else out there. I would submit that Wirth probably knew and knows more about his design goals for the language than anyone else.

    In private conversation with a fellow student, ca. 1977 or so, I was told that part of the intent for PASCAL was that it be possible to compile and link in one pass, this being Wirth's horrified reaction to the incredibly bloated OS/360 linker that could link anything if allowed enough passes.

    438:

    Uh, not quite.

    PASCAL was originally developed on the CDC 6400 at ETH-Zurich, long before the Cyber 170s came along.

    439:

    What you're describing isn't use of a sat nav, it's some sort of electronic map display. You're not supposed to be "analysing the relationships between elements in intricate static data" when you look at a sat nav, you're supposed to listen to the instruction "turn left in 100 m", supplemented by glancing at the picture of exactly how your specific turn works ("oh, I need to keep right as I come off this roundabout I'm queueing for, because the road immediately splits"). I have seen people using sat navs in this electronic map display mode - my father for one - and I agree that this makes it just a distracting extra input.

    I find that sat navs are most valuable in urban driving. Knowing that I need to take junction 16 off the M25 signposted for Birmingham isn't the challenging part. Knowing which of the three near-parallel streets heading roughly towards my destination is the one I should take, which dumps me into a one way system and which is a residential dead end is much more useful to me.

    440:

    Eh? The dash is down, insofar as it's not in front of the road — you have to look away from the traffic to see your speedo, etc!

    441:

    Pigeon wrote: And one of the many horrible things about the original 68000 Mac with the built-in screen was that the official language for programming it was Pascal with a truckload of bits bolted on the side so you could still do useful things with it.

    That was one of Apple's many good decisions. The 'bits bolted on the side' such as units were already well established in UCSD Pascal and Turbo Pascal. Apple did add a special notation for defining Pascal procedures as M68000 TRAP instructions (sys calls) without glue code, but to the app developers these were defined and behaved just like any other procedure.

    Pascal generated smaller code than C at the time, because it had callee-removed stack arguments instead of caller-removed. This made a big enough difference on the memory limited PCs of the time that Microsoft also used Pascal calling conventions for Windows until this century, which is why 'pascal' is still a reserved word in Visual Studio C/C++.

    This was also before ANSI C prototypes came into regular use, so Pascal programs were often more reliable than C programs because the function/procedure arguments had been checked by the compiler. (For modern day programmers, it was like going from JavaScript to TypeScript.)

    Underneath it was all pointers to pointers, which they called "handles", because of the crappy half-arsed memory management system the thing had

    And you can show us an implementation of a complete single tasking OS with GUI in 192K (kilobytes, not megabytes or gigabytes) that's better?

    Handles were a pain. But there wasn't memory to spare for a modern style heap where programmers can afford 20% or more unused space. The double indirection table allows the OS to compact the memory (128K remember!) to make space for new allocations.
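
    (A toy sketch of the double-indirection idea in C, not Apple's actual Memory Manager API: the application holds a handle, i.e. a pointer to a master pointer, so the allocator can move the block and only needs to update the master pointer. All names here are invented.)

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        typedef void **Handle;          /* a handle is a pointer to a master pointer          */

        static void *master[256];       /* toy master-pointer table (error handling omitted)  */
        static int   next_slot = 0;

        Handle toy_new_handle(size_t size) {
            master[next_slot] = malloc(size);
            return &master[next_slot++];
        }

        /* "Compaction": the allocator may move the block behind the program's back... */
        void toy_compact(Handle h, size_t size) {
            void *moved = malloc(size);
            memcpy(moved, *h, size);
            free(*h);
            *h = moved;                 /* ...but the handle itself stays valid.               */
        }

        int main(void) {
            Handle h = toy_new_handle(64);
            strcpy((char *)*h, "hello");   /* always dereference at the point of use           */
            toy_compact(h, 64);            /* the block moves; the handle still works          */
            puts((char *)*h);              /* prints "hello"                                   */
            return 0;
        }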

    Again, the Xerox people had chosen a similar approach for object references in their Smalltalk system. At the time, it was a reasonable approach to managing objects in what by today's standards are horrendously tiny systems.

    "Good" design changes over the years as the technology changes underneath.

    442:

    Heads-up displays. Totally freaky until you get used to them, and then totally addictive.

    443:

    The 6400 was a 170-state before the name stuck. Same instruction set, same architecture. Just more of the units were hand-wired by Seymour.

    I use the term to differentiate it from the 180-state Cybers, and then the 200 series which kinda flopped even more than the 180-states.

    444:

    Using a phone as a satnav in a modern car is not like talking or texting on a hand held phone. In my car the phone is clipped to the dashboard and connected by bluetooth to the car electronics. Using my usual satnav app I don't need to look at the speedometer since the speed is displayed on the phone. There's also a display of the speed limit and an audible warning. With the traffic option switched on the app will recalculate the route if traffic delays are too long. This is a free app. Google maps does the same but there's no speed display and it uses much more mobile data. If somebody calls me it only takes a push of a button on the steering wheel to answer using the car's microphone or speaker. I'm not talking luxury cars here. I drive a Skoda Yeti.

    445:

    Using a phone as a satnav ... This is a free app

    Which one? And does it have a bicycle option?

    I have tried google maps and a borrowed satnav, and both of them were pretty dubious for navigating a bicycle. On a bike I have the advantage that I can almost always stop and look round, but the disadvantage that doing so is expensive - unlike homo ped plumbium, the pedal-powered version has to work to get back up to speed. So something that can navigate me down bike paths without constant stopping would be great.

    The weather here today is "Cloudy. Very high (near 100%) chance of showers. A thunderstorm likely, possibly severe. Heavy falls possible." so a waterproof phone rather than one that's merely water resistant is probably a good idea :) Note that "showers" and "torrential rain" are apparently the same thing as far as the weather forecasters are concerned. Although at least we have 100% chance of rain, I've seen it actually raining with a 50% chance of rain before.

    446:

    Whereas I refuse to have anything at all to do with Shat-Navs under any circumstances, having seen drivers using them & me being a very scared passenger. IMHO they are inherently dangerous & distracting, especially if their "instructions" are not absolutely up-to-the-minute .... I have a strong suspicion that it's a very personal thing, to do with an individual's reaction to external stimuli

    447:

    Yes, I very nearly switched to PL/1 from FORTRAN_IV, but didn't. I've come across "Pointers" in an educational environment, but have never had to use them in anger, but then I'm practically stone-age as far as programming/Code-writing goes.

    Meanwhile .. "Pascal" - shudder.

    448:

    And you can show us an implementation of a complete single tasking OS with GUI in 192K (kilobytes, not megabytes or gigabytes) that's better?

    TOS, or the Tramiel Operating System (though it wasn't single-tasking; it could round-robin). The Atari ST did come with 256kb of RAM, though, unlike the much more expensive Mac, and could easily be solder-modded up to 512kb via piggyback DIL RAM chips. I had one of the first ones to come to the UK with the mono monitor, which was higher resolution than the Little Beige Toaster. There was also a colour monitor option and later a composite/RF TV feed version (the -M models).

    449:

    Which one? And does it have a bicycle option?

    Try the google maps app for either iPhone or android. (Pretty sure it has a bike mode; it's failing atm perhaps because I'm inside in a dead spot now.) I use it sometimes in the car, in voice mode, burying the phone where I won't be tempted too much to look at it. If you decide not to obey the bossy voice, it will recalculate your new optimal route, and also the route will be recalculated if there are new circumstances like accidents.

    450:

    https://here.com/en/products-services/consumer-app/here-wego-app

    (I said "WhatsApp" before. I meant this one. Brain fart, sorry.)

    Navigating an urban environment can be complex when you're trying to get from Point A to Point B. How do you choose the best route to take, how do you best balance convenience, time and cost? Consumers need an on-the-go resource that connects all modes of transportation to get to their destination on time and without hassle.

    HERE has created a truly comprehensive mobility app that can help provide complete transportation options and navigation guidance. Consumers get instant access to offline maps for more than 110 countries, public transit information for more than 1,300 cities and 3D indoor maps for thousands of shopping malls, airports and sports venues. And, HERE WeGo is just as useful on a desktop.

    I've only used it in Southern Ontario, and only for driving, so I have no idea how good it is elsewhere and for other transport modes. But as it's free, you can check it out and see if it works for you.

    451:

    Navmii. It has a walking option but I wouldn't use a satnav on a bike.

    452:

    I should note that you don't have to be online to use this (although you won't get traffic updates then). I've never used it in online mode, as I don't have enough data for something like that. It handles recalculating routes quite nicely on its own. It claims to be able to plot bicycle routes, but I haven't tried so I have no idea how good they are.

    453:

    Interesting, I will try that. Wow, it really does want the full set of permissions. But it does claim to have Sydney public transport, so that could be fun (google maps does not integrate PT and bike riding, which is very frustrating)

    Satnav on a bike relies on having both a turn-by-turn display and a nice loud speaker for the voice thing. The latter is why it's also annoying if they are too chatty.

    It's also useful if you can change the pitch and speed of the voice, but there are apps that usually do that for me. A high pitched but not as fast voice is usually good (I habitually watch youtube at 2x speed, but 2x speed navigation instructions or audiobooks are too hard to process in a noisy environment. However, lifting the voice above the normal frequency range helps with both volume and penetration).

    I struggle to understand people like Bill who can read "I want an X, I have tried A and B but didn't like them, what else is there" and reply with "you should use A".

    454:

    I only recall that it needed permission to use location (which is obvious). I think it asked for push notifications, which I denied, and don't recall anything else. (Didn't need microphone, camera, etc.)

    455:

    Hmm, "Here Wego" is a bit scary. It does better than google at finding actual bike routes, except that it also includes as options routes along things like Parramatta Road and Rookwood Road (two/three lanes each way with centre divider) as preferred bicycle routes (those were options one and two, but option three was along the actual official bike route). And there's no easy way to see which of the routes is which without clicking on each one.

    Sadly, even when I go into settings and say "public transport, bike, walk" as my travel options it can't do mixed mode journeys. There's no "ride to the train station", in other words. But I will give it a go and see what it's like for turn-by-turn nav on my way home.

    456:

    FWIW, today if the thunderstorm and windy rain nonsense is still about when I ride home I will ride ~4km to a train station, train ~15km round the loop to home, then ride the final 800m. It beats riding 15km in the rain.

    I test nav apps on my commute route because it's fairly challenging but there are really obvious mixed mode options (with the option of trading a train change for a little extra riding) as well as two really obvious but awful main road routes (neither have bike lanes let alone separated bike paths) and a really brilliant set of bike paths that local government have put quite a lot of work into.

    An app that gives me all five route options and ranks them usefully is my dream. So far the Herewego thing is just like all the rest - it finds the three bike routes but not the PT options. But if it is better while riding I'll try to buy out of the advertising.

    If it can do that, I'll be more likely to bother to even look at it when I'm going somewhere unfamiliar. Google maps usually does ok for that (except mixed mode, which almost no-one does), but google's idea of voice navigation makes giving a map book to a five year old seem like a stroke of genius. Specifically when "turn left here" actually means "don't try to cross four lanes of traffic and climb the fence in the middle of the road you idiot, go back 50 metres, turn right, carry your bike down the stairs and under the pedestrian overpass at the train station"... that's not useful.

    457:

    TOS or Tramiel Operating System although it wasn't single-tasking, it could round-robin. The Atari ST came with 256kb of RAM though unlike the much more expensive Mac

    I had one of the later 1040 models with a Megabyte of RAM, the mono monitor and a hard drive around 1990. I did my first GUI programming on that, learning GEM with the Mark Williams C compiler which had a microEMACS editor and various UNIX command line tools I presume came from Coherent, and a nice debugger csd. I also had MINIX on a partition on the hard drive (12 megabytes!), a little UNIX-like OS with C compiler that predated Linux.

    458:

    But if it is better while riding I'll try to buy out of the advertising.

    Advertising? I haven't seen any advertising.

    Mind you, I'm only using it when travelling new places, it doesn't have network connectivity, and I'm usually not looking at the screen (just listening to the voice, and that only when I think I'm nearing a turn), so I may have missed it.

    459:

    Nojay wrote: The Atari ST came with 256kb of RAM

    I thought they shipped from the start with 512K? Either way, when your minimum system configuration has at least twice as much RAM, it's easy to do better.

    And someone will surely bring up the Amiga. That system demonstrated what could happen if the OS didn't use handles. All these early systems had one address space shared by both kernel and program. On the Mac or Windows for that matter, the handles were in a known location and the OS could always clean them up when a program exited. On the Amiga, which used traditional fixed memory allocation with pointers, a program could exit without freeing chunks of memory. The OS had no way to distinguish between blocks allocated by the program and blocks allocated for its own use, so couldn't free them. And because they weren't handles they couldn't be moved either, so memory could become fragmented.

    (Note lots of "could" in that explanation: it wasn't common. The Amiga designers decided that having to occasionally reboot just to recover memory was less of a nuisance than making programmers work with handles.)

    There's plenty to criticise about the classic Mac. But IMNSHO putting Pascal and handles on the list shows a superficial understanding of the old micro systems and the constraints imposed. Don't judge past technology by what is normal today.

    460:

    I struggle to understand people like Bill who can read "I want an X, I have tried A and B but didn't like them, what else is there" and reply with "you should use A".

    I apologize for that. Not sure what happened, but will take more care in the future. Your #456 and #457 are educationally horrifying; thanks. Wondering now how bad it is for my area. (Car mode is fine. I've requested a few corrections though over the years.)

    461:

    Thanks. It seems to be common online, I'm not sure why.

    Navigation apps frustrate me, obviously, probably because I'm outside the target market. Like people with mobility problems - there's no "wheelchair" option even though that is desperately needed (and fiendishly complex, because you need a 3D high-res map to spot stupidity like excessively steep pram ramps).

    I should look at pedestrian stuff because that's surely obviously multimodal. I know "use transit" generally includes walking time/distance, but I don't know whether you have the option of using taxis as well rather than instead of. Not that that matters where I work, taxis won't pick up from here (not because it's dangerous or impossible to navigate, they will accept the request but 90% of the time no taxi turns up). GoCatch, OTOH, is much more reliable (out of two uses, admittedly).

    Navigation voices are a key thing, the google "now I'm whispering" thing is weird, especially when I'm listening to an audiobook while walking. Every third or fourth instruction is "BOOK BOOK cross here BOOK BOOK BOOK go back" (no chickens were harmed in the making of this message).

    462:

    The difficulty is that simply expressing understanding tends to imply at least some degree of sympathy. I'd expect "apprehend" to be parsed simply as a synonym for "comprehend" without any implication of "reprehend".

    Oh boy.

    Did you miss an entire year of deliberately taking the Alt-Right Memes, taking them seriously (at least at the casual level) and reflecting them at the world then eating the bile / hatred bit and trying to cast them into a newer, fresher, better light?

    You know, it hurts our kind a lot to do. [Note to Greg / Martin / Male CISWHITEHETEROANGLOs etc - the stuff you didn't see was a whole lot more painful, so, Would You Kindly step back before we have to educate your sorry asses. And no, grow up: we really did do all that pain, fucking outrageous levels of psychopathy to attempt to turn it into a "ignore the MAD ONE, S/HES LIKE A TROLL". Seriously - it'd barely get a pass on /pol/ let alone from your Minds[1]]

    It's not an accident that if you do a GREP, you'll spot a lot of now current Political Actions. Derp. That's what we were prepping you for, so you didn't get SALMON SPANKED when it happened.

    Literally thought this was obvious.

    [1] Yes, re-read that. We did just M-3 prove that 4chan /pol/ and CISWHITEPRIVELEDGEDMALES are the same when it comes to certain Mind Schema. Thanks for playing. picture of seal clapping

    I ain’t no tourist

    TL;DR

    Understanding has nothing to do with sympathy. Empathy has nothing to do with agreement. Ideology and Identity can over-lap, but frequently do not.

    You're literally children.

    463:

    You're literally children

    No

    We are sane adults. We do not need "educating", either. What that makes you is a moot question. (Only replying because I was specifically named in the message.)

    464:

    The restriction that Ada functions only had "in" parameters was lifted in Ada95.

    {LOUD SCREAM} That is the fundamental difference between a function and a procedure in every (procedural) language except C!

    465:

    Anyway, the joke went something like this:

    • BASIC assumes that the programmer is a beginner
    • PASCAL assumes that the programmer is an undergraduate
    • C assumes that the programmer is an adult
    • ADA assumes that the programmer is a hardened criminal.
    • If the programmer is a "C" programmer, ADA is probably right (see #403, #465).

    466:

    {AOL}I agree{/AOL}

    I have similar experiences and views of "greed scameras".

    467:

    I think the original Atari ST prototypes had 256kb, the first release versions had 512kb. The OS on ROM was 192kb in size, the same as in the Mac.

    The Atari ST and the original Mac were similar enough that it was possible to boot up the ST using an external ROM cartridge with Apple Mac ROMs fitted -- Apple were, ahem, "proactive" in shutting down people who tried to sell copies of the Mac ROMs so the market was limited to selling empty cartridges with an explanation on how to get ROMs out of a real Mac. A lot more people just copied the ROMs using a EPROM blower and returned the original ROMs to their Mac-owning friend.

    It was worth doing for some folks since the ST ran faster than the Apple unit, had a bigger, better display (with a colour option) and a two-button mouse, and was cheaper to purchase, making them attractive to graphics artists and the like.

    468:

    fundamental difference between a function and a procedure in every (procedural) language except C!

    I don't think that would make C better, though, because of the lack of a generic collection type or the ability to interrogate types. Functions that can only return a single primitive type are a bit limited, but in C you have to declare explicitly every combination of primitives you want to use (structs). It took long enough for C to get assignment of structs, for crying out loud. function DoThis(on these things) returns (status, number, etc) but not in C, guv, nor Pascal. It's bad enough trying to fake it in C++.
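
    (A minimal C99 sketch of the workaround being complained about: to get something like "returns (status, number)" you must declare a one-off struct for each combination. The names are made up for illustration.)

        #include <stdbool.h>
        #include <stdio.h>

        typedef struct {
            bool   ok;       /* status  */
            double value;    /* payload */
        } ParseResult;       /* one struct per combination you want to return */

        ParseResult parse_ratio(int num, int den) {
            if (den == 0) return (ParseResult){ .ok = false, .value = 0.0 };
            return (ParseResult){ .ok = true, .value = (double)num / den };
        }

        int main(void) {
            ParseResult r = parse_ratio(22, 7);
            if (r.ok) printf("%f\n", r.value);
            return 0;
        }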

    But bob help me, exceptions in C++ are the opposite of first class elements (mind you, in C++ objects are barely first class elements of the language - there's no "object" per se, being an object is merely an attribute of a type). {bangs head}. At least in Object Pascal objects were actually built into the language and worked more or less like objects in actual OO languages (Smalltalk, Jade... which is an OO database language from the early 1990's). Ahem.

    FWIW I'm liking Rust at the moment, if only for toy problems while I try to get my head around the language. It solves a bunch of thread-related problems in a way that feels quite natural to me, I just have to learn a bunch of other new tricks in order to use the language.

    469:

    "OOC, do all programming languages round numbers the same way?" "No. Nor do all compilers, all hardware, all compilers for the same language on the same hardware, or even different options for the same compiler for the same language on the same hardware."

    Nor all interpreters. Not sure if the java runtime still supports the options it used to have re precision.

    But I think some people are being a wee bit precious about money precision. I design systems for financial maths and we do fine with just 19 significant figures.
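
    (For anyone wondering what the fuss is about in the first place, a minimal C sketch: a tenth of a currency unit has no exact binary representation, so repeated double additions drift, while integer minor units stay exact. Whether 19 significant figures is enough is a separate question.)

        #include <stdio.h>

        int main(void) {
            double d = 0.0;
            long   cents = 0;

            for (int i = 0; i < 1000; i++) {
                d += 0.10;        /* binary double: 0.10 isn't exact, error accumulates */
                cents += 10;      /* integer minor units: exact                         */
            }
            printf("double: %.17f\n", d);                            /* not exactly 100 */
            printf("cents : %ld (= %ld.%02ld)\n", cents, cents / 100, cents % 100);
            return 0;
        }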

    As for modelling money being the sort of thing a CS degree should teach: hell, no! Teach the kids theory at uni, with just enough practical work to let them understand the concepts. Bitter experience will teach them practicalities.

    470:

    And Assembler assumes the programmer is none of woman born...

    471:

    Picking up on the "Scottish Play" quote ... Whom then should one fear?

    472:
    I don't think that would make C better, though, because of the lack of a generic collection type or the ability to interrogate types.

    Clearly, you're not prepared to make life awful enough for yourself (or your future self, or whatever other maintenance programmer comes along). Linked lists of tagged unions FTW! (remember that some poor schmuck probably had to do exactly that to facilitate your fancy high level language's abstractions).

    The underlying problem is the lack of (compile time) convenience features that would make such a thing palatable. Things like list comprehensions and pattern matching (in the functional programming sense, not the regex one) or perhaps algebraic data types.
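
    (A minimal C sketch of exactly that pattern, the linked list of tagged unions as the poor schmuck's generic container; names invented, error handling and freeing omitted.)

        #include <stdio.h>
        #include <stdlib.h>

        typedef enum { TAG_INT, TAG_DOUBLE, TAG_STRING } Tag;

        typedef struct Node {
            Tag tag;                     /* records which union member is live */
            union {
                int         i;
                double      d;
                const char *s;
            } as;
            struct Node *next;
        } Node;

        Node *cons_int(int v, Node *rest) {
            Node *n = malloc(sizeof *n);
            n->tag = TAG_INT; n->as.i = v; n->next = rest;
            return n;
        }

        void print_list(const Node *n) {
            for (; n; n = n->next) {
                switch (n->tag) {
                case TAG_INT:    printf("%d ", n->as.i); break;
                case TAG_DOUBLE: printf("%g ", n->as.d); break;
                case TAG_STRING: printf("%s ", n->as.s); break;
                }
            }
            putchar('\n');
        }

        int main(void) {
            Node *xs = cons_int(1, cons_int(2, cons_int(3, NULL)));
            print_list(xs);              /* prints: 1 2 3 */
            return 0;
        }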

    FWIW I'm liking Rust the moment, if only for toy problems while I try to get my head around the language. It solves a bunch of thread-related problems on a way that feels quite natural to me, I just have to learn a bunch of other new tricks in order to use the language.

    I haven't looked at rust for quite some time. Two years maybe? I remain hopeful that it'll be the future of low level programming languages. I remember spending quite a lot of time fighting with the borrow checker, but the combination of memory safety with raw pointer access is a very useful one.

    (oh, and backtracking a few topics: rust provides safe pointer arithmetic too, albeit with enough syntactic salt that you might not want to use it)

    473:
    Another problem I've found is that not all engineering fields put strong emphasis on teaching programming (snip)

    Doesn't even need to be engineering... I know a bunch of biologists and psychologists and an astronomer who have all ended up doing a lot of software development, more or less having to teach themselves some quite complex stuff from scratch. It can be done but the end result is often not very pretty.

    The worst thing by far is the Very Complex Spreadsheet. A horrifying mess of incomprehensible spaghetti code (dysfunctional reactive programming, if you like) upon which whole projects, departments and businesses are built.

    The programming required during my Undergraduate involved nothing much more rigorous than functional programming in Java and MATLAB. None of those classes even introduced the concept of pointers.

    Much as I'd love to bang the Real Programmers Wrangle Raw Pointers drum, most people just don't need to. You can even write and use quite complex data structures without necessarily needing to think too hard about the difference between values and references.

    474:

    It is likely that there was more than one, and that might have been a copycat. I heard about it in the early 1970s, referring to the UK; there were quite a lot of similar incidents (by the banks themselves) at the time, because the laws had not been updated for computerisation.

    475:

    Er, no. You can argue that it should be, but functions that can change their arguments have been common since the 1950s (including Pascal, I believe), and most of the languages get 'in' arguments (and functions) wrong, anyway. That is because the actual requirements for functions are one of:

    No side-effects: all arguments must be 'in', that property must be 'deep' and it must not call any potentially state-changing procedures. Fortran PURE and (if I recall) Ada get that right, but Python is horrible.

    Parallel/asynchronous safety: as for no side-effects, but also it must not depend on any external or shared state that may change during the call, or call any function/procedure that may.

    Full purity: As for parallel/asynchronous safety, but it may not depend on any external or shared state that may change during its lifetime.

    476:

    Spot on. In one of my 3-hour 'transferrable skills' courses, I mentioned/described forms of arithmetic, as well as how to use one form for another, that went WAY beyond any CS degree I have ever heard of. GF(n)? Quaternions? Fixed slash? Not a problem, if you work out the actual requirements of your target arithmetic and the guarantees of your implementation arithmetic, and proceed from there.

    477:

    I know a bunch of biologists and psychologists and an astronomer who have all ended up doing a lot of software development, more or less having to teach themselves some quite complex stuff from scratch.

    This was kind of one of my cues to get out of astronomy: I did my Masters as a research assistant in astronomy, the subject was a data-handling program for a probe to be flown. After I graduated, I realized that I had written complete crap and re-wrote the system in the next three months. It didn't help that all the people who helped me with this were in different countries. Also my attitude when doing the basic software engineering methodology class in the university didn't help - I thought it was all useless crud.

    After doing this for a while more while trying to figure out a proper PhD subject, I realized that this is not what I should be doing, and I got a job. The job was in software engineering, but I had brilliant co-workers and a better attitude. I like to think I got to be a mediocre software engineer instead of the bad one I was.

    478:
    I consider the trend of allowing people to use any kind of device with a screen while driving is misguided

    Especially high-information-density ones like maps.

    Most of them can be switched to a mode where the map is hidden, which is probably rather safer; at that point, it's just your own portable directional traffic sign, telling you the next turn you need to make. Quite possibly it's even a net advantage, because you shed the cognitive load of navigation...

    479:

    Everyone's always so down on assembly language. I love programming in assembly language. Though I mostly program in C, which is mostly indistinguishable from assembly.

    My students hate assembly though, and always want to know when they can forget it. It's tough to go far in computer engineering though if you only want to deal with black-box abstractions and refuse to know what's really going on.

    To bring things back full circle, in general my school has strong undergrads, but even so, in the embedded systems class I teach there's only maybe 5% of them that I'd trust to write a critical application for me. It's partly why I dread that my 20-year-old car needs replacing, and why I usually don't carry a cell phone.

    480:

    in general my school has strong undergrads, but even so, in the embedded systems class I teach there's only maybe 5% of them that I'd trust writing a critical application for me.

    But you only ever see them as undergraduates (during a phase in their life when many are discovering a social life away from home, and learning life skills other than classroom studies). I get to see them as young (and eventually, not so young) engineers, which brings us back to one of the points that I was trying to make @249 - namely, that an undergraduate degree in any subject is not a certificate of professional competence.

    It's an education; a CS degree allows you to start the "training" phase of your professional development. Anyone who believes that a new graduate should be capable of immediate professional competence, has rather missed the point. We don't complain that a law degree isn't producing fully-competent lawyers; we don't complain that a medical degree isn't producing fully-competent surgeons; and we don't complain that a student who gains their Pilot's Licence isn't fit to take on the left-hand seat of a passenger jet. But in this thread, we see multiple complaints that new CS graduates aren't up to the task of production-quality code...

    481:
    Sadly, even when I go into settings and say "public transport, bike, walk" as my travel options it can't do mixed mode journeys. There's no "ride to the train station", in other words.

    I find TripGo plans reasonable mixed-mode journeys. It doesn't do turn-by-turn, so you'd have to combine it with another app for that part.

    Then there's rome2rio, which is more of a desktop thing than mobile, and covers both local and international travel; sometimes comes up with interesting options, although probably more as inspiration than concrete planning.

    482:

    You are misrepresenting about half of the remarks, which are that new CS graduates actually take LONGER to train up to being competent than graduates from other disciplines, because they think that they know how to program/administer/design/manage computer projects.

    483:

    Those who will not study history are condemned to repeat it. I started in machine code, have done a LOT of (often low-level) assembler programming, and am one of the people who was involved in the migration from assembler to higher level languages. While I could explain why that was (and is) desirable, with some observational data, I don't think this is the right forum.

    484:

    Curiously, I've never heard that graduates of any other subject are worse in their field than people entirely ignorant of it.

    Either computer science is unique in damaging the abilities of those who study it, or there's a systematic bias against CS grads by those who might employ them.

    Probably the former. People in the software (or firmware) business would never make unreasonable or unfounded sweeping statements in public after all; that would be irrational.

    485:
    It does better than google at finding actual bike routes, except that it also includes as options routes along things like Parramatta Road and Rookwood Road (two/three lanes each way with centre divider) as preferred bicycle routes (those were options one and two, but option three was along the actual official bike route).

    Hmm, last I tried the official bicycle route parallel to Parramatta Road, along Gipps St, it had flowerbeds blocking the bike lane, forcing the rider to either dismount or swerve into the one lane of heavy traffic... maybe it's not the apps that are at fault here.

    Between that, bike paths randomly blocked by street signs and bus shelters, and the one that ends with a tall staircase and a choice of busy steep city streets, I wasn't very impressed. There are lots of nice bike paths locally, but they don't seem to join up into a commuting network?

    486:

    As I remember it, the Atari ST machines came in two flavors: 520ST and 1040ST. They shipped with 512 KB and 1024 KB, respectively. They also shipped with MIDI ports standard, which is why I bought the 1040ST.

    I was on the phone to Atari the day after they announced the machines, to confirm the part about the MIDI port.

    The biggest problem in Dallas-Fort Worth in the early 1980s was finding a shop that sold the Atari ST machines.

    487:

    an undergraduate degree in any subject is not a certificate of professional competence. It's an education; a CS degree allows you to start the "training" phase of your professional development. Anyone who believes that a new graduate should be capable of immediate professional competence, has rather missed the point. We don't complain that a law degree isn't producing fully-competent lawyers

    In Scotland a law graduate must undertake two years further training in a law firm to become a solicitor, and thirty months to become an advocate. Without this they cannot practise law. Similarly for the other professions. Unfortunately there is no such system for programming and past attempts (by the BCS for example) to introduce one failed due to the chicken-and-egg problem of there being almost nowhere a graduate could train under an FBCS to qualify.

    Given the choice between graduates with no experience and experienced programmers with no degree, the experienced programmer may seem more ready to be immediately productive. There is such a thing as negative productivity...

    488:

    See #241 and some of the responses to it.

    489:

    There were a few 260STs built before Atari decided they were too limited, still, more than the STbook (With a trackpad-like object, first, I think.). In many ways, the Atari ST was like an ultimate Commodore 64, the Amiga being the Atari 800 of the Gods, for reasons more entertainingly given elsewhere.

    490:

    Either computer science is unique in damaging the abilities of those who study it, or there's a systematic bias against CS grads by those who might employ them.

    Allow me to offer a different perspective.

    Usually the main hurdle in a sw project, no matter the size or topic, is to get an agreement on the actual requirements. CS graduates do not (I am generalizing here; I have only been exposed to CS courses in my own country, so feel free to correct me on this) get any specific training to deal with that. They may be very good at creating sophisticated, efficient, elegant, architecturally sound designs... based on a clearly and unambiguously defined problem. So they are maybe closer to Theoretical Physics than Engineering.

    So now ask yourself this: you need to automate some process or calculation in, let's say, Astronomy.

    Do you think it would be more efficient (in terms of getting things done) to get someone who understands Astronomy (and therefore understands you explaining your problem to him or her) and have this person put together a hack with Excel or Perl or Visual Basic, or would you prefer to go through a series of meetings with a brilliant CS graduate who never seems to really understand that of course when you say X you imply also Y and Z, as they taught you in Astronomy 101, and who insists instead on first defining a rigorous Object taxonomy for all the things you want to calculate, and who the hell is this Liskov they keep mentioning? Maybe some Russian astrophysicist you never heard of before?

    491:

    Actually:

    C assumes that a programmer should be able to do anything he wants to do, and makes it easy.

    Ada observes that, 99% of the time, when a programmer attempts to do certain things, he is making a very bad mistake, so they make the programmer jump through a hoop for the 1% of the time that it is the right thing to do. The hoop warns the original programmer that he is almost certainly making a mistake, and it warns the maintenance programmer that Here Be Dragons.

    Ada is very unusual in the world of programming languages in that it rewards programmers who stop and think about their design before they start slinging code. MANY Ada neophytes, who were experienced programmers, have remarked favorably on their observation that, once they managed to push their code through the compiler, and correct the errors, it generally ran perfectly.

    One industry greybeard has repeatedly pointed out that, with a LOT of extra work, C can approach the reliability routinely achieved by Ada. He and I tend to agree that this is an indictment of the American software development community AND the education system.

    Many years ago, an acquaintance of mine from BIX got stuck with a metrics exercise. He went data-mining on Pratt & Whitney timecard and bug-tracking data, on jet engine controller software projects, and discovered that Ada gave double the programmer productivity and one quarter the defect density of any other language in use at Pratt & Whitney at the time. It initially showed up as a delta between military and commercial projects. When he went digging, he found that the only statistically significant difference between the two sides of the house was Ada. He reported the results up the food chain, and told us that upper management, upon seeing it, realized that there was MONEY in those results, and, for a while at least, mandated that ALL future jet engine controller projects be done in Ada. I do not know whether that is still in force. When BIX went away, I lost touch with a lot of people.

    Full Disclosure: Unlike most of the people who scream and run for the hills at the mention of Ada, I've actually worked in the language, on real systems.
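
    As a concrete (and entirely hypothetical, not drawn from the comment above) illustration of "C makes it easy": reinterpreting the bits of a float takes one line and no ceremony, whereas Ada would have you name an instantiation of Unchecked_Conversion, which is precisely the warning flag described above.

        #include <inttypes.h>
        #include <stdio.h>

        int main(void)
        {
            /* Type-punning through a union: legal C, one line, and nothing in
               the source shouts "reinterpretation happening here".            */
            union { float f; uint32_t u; } pun = { .f = 1.0f };
            printf("%f has bit pattern 0x%08" PRIX32 "\n", pun.f, pun.u);
            return 0;
        }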

    492:

    "C assumes that a programmer should be able to do anything he wants to do, and makes it easy."

    There's a common misconception that C is a high level language. It's actually more of a machine independent assembler...

    493:

    There's a common misconception that C is a high level language. It's actually more of a machine independent assembler...

    I've heard the quip that it combines the expressiveness of assembler with the clarity of assembler.

    494:
    Allow me to offer a different perspective.

    Perhaps first you could point out where I suggested that domain-specific problems should not be solved by domain experts?

    495:

    Full disclosure - I've worked in Ada as well. I've also worked in BASIC, COBOL, Pascal, C and "visual Basic".

    As you say, once you get Ada to compile it mostly "just works" (and the biggest issue I've had with it for the last 20 years has been an assumption (not mine) that real-time input data from sensors will be clean).

    496:

    ... the Real Programmers Wrangle Raw Pointers drum ...

    REALLY real programmers wrangle raw pointers ON the drum.

    497:

    Eh? The dash is down, insofar as it's not in front of the road — you have to look away from the traffic to see your speedo, etc!

    Ah, perhaps it is a terminology problem. I'm advocating placing the GPS display on the horizontal shelf at the lower edge of the windscreen/windshield (whatever that's called). That way only side-to-side eye motion is needed, and not much of that -- the road ahead stays in the field of view at least peripherally. As for the instrument cluster behind the steering wheel, aside from very occasional checks of the fuel gauge, the important instrument that needs frequent downward glances is, indeed, the speedometer. But the GPS usually has a speed indicator that's more accurate than a lot of built-in speedometers.

    498:

    Perhaps first you could point out where I suggested that domain-specific problems should not be solved by domain experts?

    That's the point: CS graduates are domain experts in Computer Science, not in "Practical Application of Programming To Solve Real-World Problems". This is, IMO, the reason there is a sort of bias against employing them when you need to solve something via computers.

    499:
    This is, IMO, the reason there is a sort of bias against employing them when you need to solve something via computers.

    And yet it is not what Elderly Cynic was talking about, and it was his post I was responding to. It is perhaps more closely related to the issue that Ioan was talking about which I responded to separately.

    500:

    Ye gods, that takes me back! I didn't actually do that, because the programs I was working on had only one level of data structure, but ....

    501:
    upper management, upon seeing it, realize that there was MONEY in those results, and, for a while at least, mandated that ALL future jet engine controller projects be done in Ada. I do not know whether that is still in force

    Outside of safety critical and real time systems, the penalties for writing terrible software are often light, if not entirely absent. Therefore, the things that affect costs are availability of people capable of writing in the crazy moon language that your company requires, and the price (and capabilities) of the tooling.

    To state the obvious: if there aren't enough jobs for language X, people will retrain and go elsewhere. If there aren't enough people who can write language X, those that can will charge you more and you may simply be unable to find enough of them. If language X is unpopular, the quality of its tooling will lag behind its more fashionable counterparts which will probably end up cheaper, faster and generally better.

    In the safety critical world you end up with stuff like MISRA-C and High Integrity C++, attempts to constrain commonplace languages and developers rather than specialist languages designed with safety in mind. Ada might be a better choice than either of those in many circumstances, but if it isn't economical who will bother?

    In the regular safety-ignoring world, you just end up with things like node.js instead of erlang, because no-one wants to do things right if they can just do them using all the latest cool toys instead ;-)

    502:

    The company I've been working at prefers to train SysAdmins as coders to training up CS grads for the most part.

    A lot of what a CS degree teaches is only useful for CS research, specialist programming, or a project role that involves actual design. Grunt programming standard enterprise stuff doesn't have a use for knowing the ins and outs of algorithm families, the ability to prove an algorithm will both terminate and be correct, how to write C, assembly, or LISP, enough detail on the various fancy data structures to write them from scratch (How many of you are that familiar with red-black and k-d trees?).

    For those of you who enjoy writing Assembly, I recently found a game, TIS-100, described by its creator as "The assembly language programming game you never asked for!" That tagline fails to include that it's a parallel assembly language programming game. Remember to read the manual, as it's in-verse!

    Also, hurrah for the preview button; that link was broken the first time I tried it, cuz I forgot the https: //, so the page thought it was a relative path. I'm not so good with html, and my google-fu seems to be on the fritz. Anyone know how to print https: // without the space without the browser interpreting it as a link?

    504:

    https:& #47;& #47; (with no spaces after the ampersands) works.

    505:

    Some schools have been getting better about that. Mainly, the ones with connections to the industry so they actually get feedback. Mine had a mandatory course on that. Looking back, it should have gone farther with it, but on the other hand, at least it had one, and at least it was mandatory. Of course, they were using it as the intro to project lifecycles and the joys of random teams (most other projects are individual), and it really should have been a multi-semester thing for all it was trying to do, but hey, better something than nothing.

    506:

    But you only ever see them as undergraduates

    True, but my experience in industry is they often throw the "boring" tasks (like IoT security) to the new fresh-from-college interns.

    As a computer engineer I find the discussion of the value of a computer science degree to be amusing, mostly because of the traditional rivalry.

    What's the old quote about Computer Science being neither about computers nor science? At least in my experience in the US it's really a fancy maths degree where they might take a few java electives.

    507:

    I've worked in FORTRAN (both FORTRAN IV and FORTRAN 77), various assembly languages, LISP (relatively briefly), Ada, C, C++. I've had to read JOVIAL and PASCAL. A long time ago I did a #include processor in EMACS macros (!). I've used BASIC non-professionally, and I learned COBOL (and once helped train a math Ph.D. in COBOL, but that's another story), but I've been very careful to keep that word off of my resume. And I occasionally use AWK for very short things that need doing on the fly. I've been known to hack quick sanity checks or number calcs in GNU EMACS LISP. (My .emacs file has a LOT of short functions for doing electronics calculations.)

    And I've played with FORTH now and then.

    And I keep saying I want to do some serious playing with Oberon (language AND operating system).

    And I keep wanting to spend some serious time hacking LISP.

    And I feel your pain about noise on sensors. I impressed the bleep out of a TA in a robotics class once when I tossed the traditional first-order derivative approximation for a PD controller and hand-rolled a second-order approximation. The resulting controller was obviously and DRAMATICALLY better than the traditional approach. I'd taken the numerical methods class over in the Math department the previous semester, and it was an obvious thing to try.
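
    For anyone curious what that looks like, here is a sketch of the two estimators (my reconstruction; the comment gives no formula, so treat the exact form as an assumption): the usual first-order backward difference versus a three-point backward difference that is second-order accurate in the sample period dt.

        /* x0 is the newest sample, x1 and x2 the two before it, dt the period. */

        /* First-order backward difference: truncation error O(dt).             */
        double deriv_first_order(double x0, double x1, double dt)
        {
            return (x0 - x1) / dt;
        }

        /* Three-point backward difference: truncation error O(dt^2).           */
        double deriv_second_order(double x0, double x1, double x2, double dt)
        {
            return (3.0 * x0 - 4.0 * x1 + x2) / (2.0 * dt);
        }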

    508:

    And I feel your pain about noise on sensors.

    I had a noise problem on a position sensor readout once -- the data was presented on a 16-bit parallel bus. What the manual didn't say was that the device had been developed from an 8-bit bus version with a new wider interface hacked onto it. As the lower 8-bit reading rolled over from 11111111 to 00000000 there was a delay of a few microseconds before the upper 8-bit byte incremented (and vice versa in the opposite direction). There was no Data Valid strobe... well, there was after I had laid hands on it, although I had to implement it myself externally as this was a very expensive ruggedised sensor head. Basically I monitored the outputs looking for the rollover condition in both directions and checked the upper byte had incremented or decremented properly before my hack pulled the *DV line low to signal the data was valid.
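
    For anyone who hasn't met that class of problem, the usual software-side workaround looks something like the sketch below (hypothetical register names; the fix described above was done in external hardware, so this is an assumption about the general technique, not the poster's solution): read the high byte, the low byte, then the high byte again, and retry if the high byte moved mid-read.

        #include <stdint.h>

        /* Hypothetical memory-mapped halves of the 16-bit position reading. */
        extern volatile uint8_t SENSOR_HI;
        extern volatile uint8_t SENSOR_LO;

        uint16_t read_position(void)
        {
            uint8_t hi, lo;
            do {
                hi = SENSOR_HI;
                lo = SENSOR_LO;
            } while (hi != SENSOR_HI);    /* upper byte rolled over mid-read */
            return (uint16_t)(((uint16_t)hi << 8) | lo);
        }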

    509:

    I started at Philly Community College. First class was a pseudo-assembler, with 13 instructions, which included add and subtract.... 36? 39? of us started the class; three of us considered it a Mickey Mouse class, and we, and 10 others who were desperately treading water, were all that took the final exam. Then, to make sure you really wanted to learn to program, the school insisted that your second term was BAL (IBM mainframe assembly). I still have that textbook - if I could have gotten the rights to it, I could have put big pharma manufacturers of sleeping pills out of business.

    COBOL, and the optional Fortran.

    Then I started working. Quickjob, a proprietary report writer... and the college's real software was in PL/1. Oh, and a lot of CICS. Only a small amount of COBOL. Next just had COBOL, and then I got put on the team with the brand new PCs (1983, this is). "We're going to learn C!" "No, compiled BASICA".

    Next job, COBOL & CICS, with occasional short spurt of RPG (AUGH!!!). Then more COBOL.

    Taught myself C from K&R. I always viewed that as "ok, you know how to program, here's a new language and how to use it", which I still adore. So, finally, C. And Pascal, for the M68000 co-processor.

    About Pascal: I read that Wirth created it as a teaching language, and that commands like println were bolted on....

    Then into Unix and C, finally, at the beginning of '91. And awk - short awk scripts? Wimp. I worked on a project where we would tell all these sources the format of the data they were going to send us, and meanwhile, I wrote a d/b loader in C. Then all the sources said, sorry, no budget, take what we've got, which was when awk saved my butt. I had about 30 scripts to reformat all that data... running 100-200 lines of awk.

    I love C, but there are associative arrays that I adore....

    mark

    510:

    About 40 years ago, I was talking to our HoD (a well-known name in CS) and said the ideal would be to write a project in the first term, someone else (unknown) modifies its objectives in the last, and they share the marks. He said that he would very much like to do it, but there wasn't time; now, with 3-year courses rather than 1-year, it would be technically feasible, but the political problems would be significant!

    511:

    Associative arrays? The only proper way to do them is Snobol. You've got associative tables:

    animal_sounds = TABLE()
    animal_sounds<"cat"> = "miaouw"
    animal_sounds<"dog"> = "woof"

    But you can also do string indirection through variables:

    cat = "miaouw"
    animal = "cat"
    cat_sound = $animal

    You can even do string computed goto's that way.

    512:

    While we're on the subject of programming languages, thought I'd pass along a quip from a programmer friend who once grappled with APL: "APL, the world's first read-only programming language, is an initialism for 'APL Programming Language' -- it's a recursive initialism."

    About in-car navigation systems, it's worthwhile noting the reasonably well-established rule of thumb from cognitive science: the brain has a limited amount of bandwidth in any given context, and you have to divide that bandwidth among whatever tasks you're attempting to accomplish simultaneously. The problem with nav systems -- and with phones, the radio, conversations with carmates, and trepidation over the root canal you're driving to, among other things -- is that each takes up a certain amount of bandwidth. When you try to exceed the amount you've got available, one or more tasks must give up some of their bandwidth to permit this. If that benevolent task is "paying attention to the road", things won't end well.

    Context-switching is also an issue within a given set of sensory inputs (e.g., from visually parsing a map display to visually examining the real world through the windscreen). Metaphorically, while you make the switch, it's like rebooting the program that you're shifting to. Mixed-mode communication seems to work better; for example, when my wife and I use GoogleMaps to navigate, we primarily listen to the spoken narration and ignore the map. If things look to be complex, the passenger in the shotgun seat will provide elaborations (e.g., "500 m means the second intersection on the left") when necessary. Works very nicely indeed, and rarely steers us astray.

    Need to track this down again, but I recall a recent study suggesting that although you can improve your ability to multitask, there's still a fairly firm hardwired limit: if you try to accomplish multiple tasks simultaneously, you'll never reach the performance level that's possible if you focus on one task at a time. Makes sense.

    514:

    A new CS grad is used to being the smartest person in the room. When newCSgrad meets the guy who was the smartest person in the room AND has twenty years of experience... newCSgrad's faults become manifest for the first time in his/her life. It has to be interesting.

    515:

    Re the original post, saw and (quick) read this today: WALNUT: Acoustic Attacks on MEMS Sensors (pdf: WALNUT: Waging Doubt on the Integrity of MEMS Accelerometers with Acoustic Injection Attacks). It's not quite as good (bad) as it sounds, but that's just because they didn't find (or didn't admit to finding) an exploit to break into a system with it. Quoting the paper:

    "At the software system level, our experiments demonstrate the ease of injecting acoustic interference into an Android smartphone’s accelerometer to take control of an app that drives an RC car. We also demonstrate a proof of concept end-to-end acoustic attack by injecting 3,000 steps per hour into a Fitbit. The results confirm our concerns that system software does not adequately validate the integrity of sensory data—blindly trusting the output of sensors by default. ... We assume that the attacker is able to induce sound in the vicinity of the victim device, at frequencies in the human audible to ultrasonic range (2–30 kHz). ... or by a drive-by ditty where a user is tricked into playing malicious music either by email or a web page with autoplay audio enabled." (Bold mine.)

    Walnut (as in "brain the size of a walnut", a reminder that we all have bounded minds) is one of the nicknames for the cat of the house, so there was an added incentive to read the paper. :-)

    516:

    That would be a solution to what I've found working with graduates: they've little or no experience of working on other people's code. A common first-few-weeks activity round here was "Here's read-only access to our bugs database", followed shortly after by "Here's checkout access to the source code repository, fix bugs X, Y and Z". Bugs were assigned by a supervising engineer who had a good idea of how to fix them already and who would check over and critique proposed fixes.

    517:

    Much of the mathematics is pretty fair crap, too, unfortunately. An actual example:

    Them: [Some bogus claim] because ... the law of large numbers.

    Me: You know that doesn't always apply, don't you? In particular, it doesn't apply to distributions without a mean.

    Them: Those don't occur in real life.

    Me: Oh, yes, they do. Consider the distribution of 1/X where X is U(0,1]. And effective disposable income is well-known to be best modelled by distributions without means.

    Them: That's irrelevant, because all representable numbers are bounded.

    Me: !!!!!

    518:

    A new CS grad is used to being the smartest person in the room.

    Which room? We were being lectured by Turing Award winners, people whose lecture notes had become textbooks and then set texts on more than their own course. So; maybe the class medallist could walk out smugly (ours didn't, although perhaps he deserved to), I certainly didn't - by graduation I had discovered women, beer, folk music, represented the University in a couple of different activities, done most of the work towards my commission... and got an acceptable grade of degree which reflected those distractions ;)

    It keeps coming back to how you and your firm introduce the new graduate into the world of work. If you make no real effort to mentor, or train, then congratulations - you'll get rubbish engineers with a side dose of arrogance. No sh*t, Sherlock.

    If, on the other hand, your firm and your team leads/mentors make an effort with the new graduates (delivering effective training, allowing them to gain experience under supervision, giving confidence boosts and humility lessons as appropriate) then you get to see the payback in a couple of years or less.

    Occasionally, you need to deliver the humility lesson to an experienced engineer (been there, done that). It took a week or two for it to run its course, but I did it gently enough that he was persuaded rather than angry. More common, IMHO, was the need to build up confidence (done that - two of my software apprentices went on to far more senior positions than I ever held). The fact that I was doing both of these activities at the same time, within the same team, made life... interesting ;)

    519:

    Just remember to not allow any wires through the cage that could be used to carry signals......actually that is a slight flaw in the plan.

    Personally I'm fairly relaxed about smart meters of the remotely readable variety. I already have a digital meter that I give a reader access to on demand. Them having access the rest of the time does not change the info, just how up to date it is.

    On the other kind it is very easy to get drawn in. The upside is, of course, huge once the dryer, freezer, fridge and washing machine are connected. But the demand shaping that lets grid engineers get out of jail free without spinning reserve is also the hackers' trash-the-grid tool. Still, the engineer in me thinks that is a solvable problem.

    520:

    I find TripGo plans reasonable mixed-mode journeys.

    Sadly not for me. TripGo doesn't suggest bike+train or give me the option to demand it, and it also misses the option of walking a little further to avoid changing trains.

    Its bike route is not very good, it avoids the three lanes each way divided major road in favour of a two lanes each way divided road. Which is better, but it uses that road to get around the Rookwood Necropolis which is a major bike thoroughfare. Rookwood links to the Cooks River Cycleway which TripGo also avoids.

    Still, at least it's possible to ride from home to work. I also tried Circular Quay to Bondi Beach and it couldn't find a bicycle option at all. So "does it take me through the park or round it" turned out not to matter. I fear TripGo hasn't really got Sydney working yet.

    522:

    official bicycle route parallel to Parramatta Road, along Gipps St, it had flowerbeds blocking the bike lane

    I assume that's Concord/Canada Bay? But that seems to mostly just have dead bikes painted on the road so I'm not sure. But yes, Sydney has a great deal of half-arse bicycle infrastructure.

    The trick is to submit to council when they're designing the stuff and don't be afraid to make very detailed suggestions with explanations. I have had some luck getting councils to acknowledge that not all bicycles are lightweight toys travelling at 10-15kph that can be easily lifted over or around obstacles.

    Marrickville has been undergoing a shift to "mummy bikes" with giant bins full of children and electric assist, and council is slowly adapting their ideas about bicycles to match. With those, even lifting one up a curb is a major operation, getting one up the stairs on the Sydney Harbour Bridge "bicycle path" is where you need your troupe of well muscled naked dancing boys to leap into action. Sorry, where were we? Oh yes, bicycle facilities. Mixed bag, what.

    523:

    This abomination of an "accessible bridge" for example, is being replaced with a new one. One design feature is that it will actually be possible to ride a load bike, or tow a kiddy trailer, over the new bridge without spending ten minutes wiggling the thing around the unnecessary U bend shown above.

    Sadly google street view has not brought the bikes out to Sydney yet so we don't have easily accessible footage of the bike paths that I can link to. But my submission to their draft bike plan ended up having 30 sections and over 5000 words. SydneyCyclist also got excited. I got a reply to my first submission asking for more information on several points, so I suspect they read every submission. But I covered everything from "the anti-bicycle barriers in this alleyway are annoying" through to "lift the bridge over Cooks River at Canterbury Road a metre or two and put a decent bicycle path under it, the current thing is awful except when it rains, at which point it becomes unusable due to the substantial jet of stormwater from the road that crosses it about 1.5m off the ground". You're probably never going to get Google street view of that because their trike couldn't fit through. A bicycle barely fits through.

    524:

    Then you would be at least partly wrong. On the hardware side, note that while China puts together much of the world's consumer electronics, 99.999 per cent of the hardware design complexity is in the chips. China is the world's largest importer of semiconductors. They spend more on silicon than they do on oil, a fact the Chinese government is budgeting billions to try and rectify.

    The silicon in your smart meter is more likely to have been designed in Edinburgh than most places. (Google metroic, now part of ADI.)

    As for getting your firmware from the lowest bidder in Bangalore, I doubt that would make sense (or rather, they may be in Bangalore, but they won't be cheap unless you are really dumb). Firmware on a chip requires in-depth knowledge of the hardware and is resource constrained. Apps you allow the user to run are a different matter.

    525:

    if there aren't enough jobs for language X, people will retrain and go elsewhere. If there aren't enough people who can write language X

    As a mostly-former Delphi programmer, I can testify to that. I have also been paid to use Ada, Jade (the Cardinal OO database one), Eiffel and probably others. My CV does tend to emphasise flexibility as much as depth because I find that that gets me the interesting jobs.

    I admit to making good money over many years from "I can haz threads... arrrgh, y u no worky" type programmers. Including my current job, where I got to say "how about first we fix the 2000-odd compiler warnings and hints, then start looking for other bugs". By which I meant buying a code analysis tool and running that, then fixing the 10,000 issues identified (most of which were fairly trivial, but you fix them and then suddenly you're looking at "you have three global variables called DiskCache, is that wise?"). I have to focus on the dollar signs, otherwise I cry.

    526:

    If, on the other hand, your firm and your team leads/mentors make an effort with the new graduates (delivering effective training, allowing them to gain experience under supervision, giving confidence boosts and humility lessons as appropriate) then you get to see the payback in a couple of years or less.

    Yup. It can be difficult when you're the consultant/ contractor to do this, but the payoff can be huge. Having vaguely competent minions is very handy, and when they're permanent staff who are friends with the boss it makes some of the more unusual demands flow more smoothly than usual (I tend to start early most days and then leave about lunchtime on Friday, for example). Sadly "buy me decent tools" is often the first unusual demand.

    On that note, I recommend 4k monitors, even only 28" ones, because the extra pixels do make a significant difference to how readable the text is. With 4k I can get easily readable text on two pages side by side with about 90 lines by 200 characters on them (in a code view), or two A4 pages side by side in a WYSIWYG document editor. It is playing havoc with the line lengths in my code, admittedly (80 characters? I have a ruler there, it's not even half way across the page and there is lots of code to the right of it. Or maybe that's 100 characters... I should care, I suppose).

    527:

    As long as there's a minor thread going on about GPS, let me ask about accuracy.

    Casually following this over the years, I've found that consumer-grade GPS + WAAS gets into the < 10 meters range fairly reliably (with exceptions). GPS + WAAS + GLONASS, like my Moto G phone has, is noticeably better, 2 meters or thereabouts, maybe a little better. I've frequently gotten coordinates off the phone, put them into Google Earth, and landed within stepping distance of where I actually was.

    So what do y'all experience where you are?

    529:

    let me ask about accuracy @ 527

    I have no idea what happened to that message, but the idea was that I've found that GPS + WAAS + GLONASS gets 2-ish meter accuracy, vs 10 m without GLONASS.

    [[ It's HTML - if you type an opening <, it assumes you're starting an entity and tries to use the rest of the paragraph on it. Type &lt; instead. - mod ]]

    530:

    I am not an expert, but that sounds about right, given modern equipment and the lack of interference.

    The old advice used to be, don't use GPS on the summit of Ben Nevis in a whiteout, because it would likely lead you over the edge. Now though, probably not.

    531:

    Not relevant to Australia, but have you seen CycleStreets? They seem to be collecting data about cyclable routes, so avoiding sending you across a busy road, but not looking at integrating with other means of transport. I was surprised to learn that they haven't expanded to covering other countries, but apparently the people who developed it are planning to open source it when it's a bit more mature.

    532:

    Since there's been a lot of talk about WiFi & languages, here's a thought (not represented yet in this thread):

    Axioms:

    1. All routers are insecure and back-doored by absolutely everyone (Glenn Greenwald: how the NSA tampers with US-made internet routers - Guardian, May 2014 - but also anything made in China).

    2. Most consumers don't treat their WiFi as anything but a dumb dongle - so the # of firmware upgrades done is probably minimal. Also, most people are unaware that most modern routers come with ~64 / 8 meg of RAM / Flash memory (What Are Throughput & RAM/Flash Memory on a Wireless Router? - Flashrouters, chosen because of the name). Users rarely turn off their routers.

    3. WiFi signals themselves can be used to perform metadata predictions (Wisee uses wi-fi signals to recognise body gestures - BBC, June 2013; RF-Capture: Capturing the Human Figure Through a Wall - MIT, SIGGRAPH Asia 2015).

    4. Facial Recognition is already out in the wild (Russian Trolls Find A New App For The Same Old Harassment - Vocativ, April 2016), but less well known and also in the wild is gait recognition software (Gait analysis for human identification - University of Maryland, PDF, technical paper).

    5. At the ISP level, with all the new idiocy being pushed (Snooper's charter: Bulk internet data collection ruled illegal by EU court - City A.M., Dec 2016) there's a potential goldmine. That's without assuming that 5-EyEs etc haven't already trawled it all.

    ~

    So, posit:

    Everyone under the age of ~30 or so has spent most of their lives online. Anyone under 20 with a phone?

    Most of the people in this very thread probably have a box in the attic with old game systems (ZX81 and beyond) but also piles of outdated routers etc. (Well, presuming you've not recycled them).

    At what point do you consider that an automated algorithm hasn't already trawled your Unique ID / Gait / Facial Recognition data (presuming you were one of the 80%+ of people silly enough to use Facebook etc) and is just sitting there, waiting to write the story of your life?

    Now picture those mounds and mounds of routers sent to Africa to be rendered down (E-Waste Republic Der Spiegel, long-form indepth report, using video, graphs, hard data and photography. 100% class piece).

    Not so much "Ghosts in the Machine" as "Ghosts of Colonialism revisited".

    So, it's not Gibson cyberpunk, it's something else and a little more depressing.

    533:

    Yes thanks :) I read quite a few cycle type blogs, so I get references to projects like that. There are a whole bunch of those sorts of projects around the place, and that is one of the better ones. It's good to see national ambitions and hopefully it will become dominant (or get sucked into google and thus made available to everyone).

    In Sydney we do seem to have had a dump to google of council cycle facilities, but there's not a lot of analysis, or perhaps just the leap from an organised GIS dump of "what council builds" to "what cyclists think of it" is hard to capture. As noted above, some councils are dicks about cycle stuff, so you get dead cyclists painted on random roads as "cycle infrastructure", while others won't bother labelling it a cycle facility until their bicycle planner approves it as a cycle-able through route.

    Like every other cyclist in Sydney I know Fiona Campbell (BikeSaint) who is an extreme liberal in the classical sense of the term - don't lobby the bureaucrazy, become the bureaucrazy! She has somehow managed to turn the Sydney CBD into a place that you can reasonably ride a bike, albeit with active opposition from the state government (the premier is a prosperity gospel type). Me? I just attend Critical Mass every month because it beats pastafarianism as a religion and I enjoy not taking things seriously. Even serious things.

    534:

    Note: imagine a future anthropologist (a la Saturn's Children) trawling through all of this. Their response as they digitally picture (with RL mapping data across WiFi router + phone + Car data etc). This is what they did as the World Burned Down.

    Poachers break into Paris zoo, shoot rhino dead and steal its horn Independent, 8th March, 2017

    ~

    Stuff not mentioned so far:

    1 Shodan into infrastructure is an old one, but now, the physical is better.

    Oroville Dam Spillway failure Imgur, via Reddit - numerous HD drone footage shots

    Hint: California isn't out of drought, massive wet stuff on snow-pack leads to a worsening of conditions, derp.

    Tech highlight? Intern who published some of the early pictures was fired.

    2 EPA / NOAA / NASA and even weather data being restricted. Oh, and UN budget slashed.

    Yeah, that's not good.

    3 Not mentioned in thread so far, ultimately very important: critical infrastructure costs / maintenance of entire undersea cable lines (hello Pigeon):

    Real question: what happens / who owns / who is responsible / who governs the entire fibre networks that allow modern Banking / Trade etc?

    Now, if those start to degrade... well. That's an interesting one. [Note: fairly sure they'll never be allowed to degrade, but it's an interesting thought piece. What would happen if someone introduced a biological or mechanical vector to start degrading undersea FO cables? Apart from the usual sharks just nibbling?]

    ~

    Massive Spoilers there boys.

    535:
    At what point do you consider that an automated algorithm hasn't already trawled your Unique ID / Gait / Facial Recognition data (presuming you were one of the 80%+ of people silly enough to use Facebook etc) and is just sitting there, waiting to write the story of your life?

    You don't need to use Facebook (or gmail, or...) yourself to be thoroughly IDed by their systems; you only need to know someone who does. Their EULAs assert that a user is giving them permission to gather together any information the user gives them about non-users (and ID those non-users, and sell that data, with personal identity information attached, to all bidders). (Which is my biggest single problem with that type of business model, though far from my only one.) A company will give you free email in exchange for full details of your life? I think taking them up on it is stupid, but I guess that's your choice. A company offers you free email in exchange for full details of my life? That shouldn't be legal.

    536:

    Yes, yes, we're all aware of "Shadow Profiling" and the webs being cast.

    Boy, we were weaving webs when your lot was working out bronze working.

    Questions:

    1. What happens when a ShadowRun Operative uses an innocent? For 10+ years or so.

    2. Degradation of the web: interesting one - at what point does "FAKE NEWS" (which is, to be blunt, almost all News Media at this point, at least from the big seven US corps) start polluting your algos and hurting your bottom line? (C.f. FB and Germany and his attempts to push for at least Senate in 2020.)

    3. All of my names have meaning. Most of them at least on the M-3 level. Only one has been a "normal" name. What did it cost to use that "normal" name, and what did it mean?

    ~

    Ask better questions.

    537:

    Oh, and M E T A

    @Host

    Sturgeon is wearing her Red Dress[tm] that gets covered by the Daily Fail constantly (Have you not got anything else to wear, Nicola? First Minister Sturgeon dons her favourite red suit twice in a week (and it's a very familiar outfit) Daily Mail, 13th March 2017)

    They've covered this Red Dress shit since before 2015.

    But now with an NICE ILLUMINATI / EYE triangle cut into the neck line.

    HOW MORE OBVIOUS CAN IT BEEEE?

    Political Symbolism, 101 - oh look, Turkey is accusing Denmark of being involved with Bosnia/Serb genocide. No really, go look it up.

    Seriously.

    Mad Max: Fury Road YT: Film, opening sequence, 5:40

    ~

    Oh, and you owe me £150 for all that Dante / Medical prognostication paying off - GOP is burning it down, 24,000,000 without insurance etc. The real sharks know what we know and are acting on it.

    Yep, they're literally going to do what I said they would, counting on zero-coke revolution.

    539:

    Care to link up? :-) See #522.

    540:

    Eh, I'm only here to grumble about programming languages, but reposting stuff that was mentioned in the OP seems a bit silly at best, and perhaps a little rude?

    541:
    I admit to making good money over many years from "I can haz threads... arrrgh, y u no worky" type programmers.

    The new failure mode isn't so much projects drowning in a sea of mediocre developers and managers, but instead the products of the new "gig economy" being abandoned at the moment of their creation and most (or all) of the people involved going their separate ways so there's no continuity of knowledge or understanding. This doesn't need a software archaeologist or even a software coroner so much as a computational necromancer... "we've neglected our baby and now it is dead, please figure out how it worked and reanimate it for us?" and everyone has to pretend to be happy with the shambling revenants that result.

    542:
    oh look, Turkey is accusing Denmark of being involved with Bosnia/Serb genocide. No really, go look it up.

    I see some accusations leveled at the Netherlands, and some tangentially related grumpiness pointed at the Germans, but not so much about Denmark. Linky?

    543:

    Well, I posted the list mostly to establish that I do have significant experience in monolithic, procedural and object-based software, rather than as an "experience top trumps" thing.

    I agree about (the great) awk as an "on the fly" text processor. I once highly impressed a client when they asked me for a very specific text format for their deliverable data sets, and despite it being asked for in conversation I delivered a demonstration file to him about 15 minutes later.

    544:

    Likewise, and include the "BBC News" in "I see...NL".

    545:

    Dutch, Danes, same first letter. Same number of letters even.

    Obvious innit!

    546:

    No. Clueless. Mea culpa.

    547:

    Be lion-mettled, proud, and take no care
    Who chafes, who frets, or where conspirers are.
    Macbeth shall never vanquished be until
    Great Birnam Word to Byte Dunsinane Hill
    Shall move against him.

    548:

    Back in the 80s, I was writing telecommunications software in Pascal. A goodly chunk of the Canadian telephone system ran on software written in Pascal.

    Assuming it is the same entity that I'm thinking of, I think you might actually be thinking of Protel rather than Pascal. Though Protel was heavily based on Pascal as well as Algol 68.

    549:

    Well, Wikipedia suggests (I'm not familiar enough to hold a view) that PROTEL is a superset of Pascal.

    550:

    Re: '(presuming you were one of the 80%+ of people silly enough to use Facebook etc)'

    Probably excludes most of the posters here.

    Of more concern: routers.

    I have absolutely no choice on this: I have to use the router my ISP provides. To me, this means that my ISP is okay with being sued for any breach of privacy.

    551:

    In the same sense that Modula2/3 were also supersets of Pascal. The language had introduced a bunch of constructs intended to make it possible to replace running code, but had also changed things like variable assignment (left to right, rather than right to left as is more traditional).

    552:

    Just a bit of fuzzing of political rumor mill stuff that hadn't gone live:

    Lars Loekke Rasmussen, the prime minister of Denmark, blocked Turkish prime minister Binali Yildirim in response to Erdogan’s “rhetorical attacks on the Netherlands”.

    In a statement Mr Rasmussen said that "under normal circumstances” it would be a pleasure to greet Yildirim in Copenhagen.

    Now DENMARK blocks Turkish ministers from official visit as tensions continue to escalate Daily Express, 13th March, 2017

    Denmark wants to postpone Turkish PM’s visit over referendum rallies row RT, 12th March, 2017

    Source used because Richard Desmond and RT (Moscow) are having some really strange synergies. e.g. The Daily Express being used by Americans / Troll bots on Reddit.

    One would almost suspect that being pro-UKIP and pro-Moscow were somehow linked...

    p.s.

    Dutch, Danes, same first letter. Same number of letters even.

    It's more that the core EU politicians have likely decided a rather more unilateral stance against Turkey and are in the process of rolling it out, but sure:

    Mad Old Witch is totally Bonkers!, news at 11.00.

    553:

    I wasn't saying that Wikipedia was correct; I was just saying that I didn't know whether it was wrong (and am not familiar with Modula either).

    554:

    SNOBOL, hah

    awk 'BEGIN { mypets["cat"] = 1; mypets["dog"] = 0; mypets["newts"] = 0; mypets["fish"] = 0; for (i in mypets) { print i, mypets[i]; } }'

    So there!

    mark

    555:

    And I'm still boggled by the idea that anyone other than perhaps sex researchers would want a wifi dildo....

    Just because you can, doesn't mean you should.

    mark

    556:

    Ru wrote:

    The new failure mode isn't so much projects drowning in a sea of mediocre developers and managers, but instead the products of the new "gig economy" being abandoned at the moment of their creation and most (or all) of the people involved going their separate ways so there's no continuity of knowledge or understanding.

    Well, here's some better news, at least for the US: last time I was job hunting, in '09, I noted that I had not seen so many direct hires and temp-to-perm positions in decades. I can directly relate that to the famous Microsoft lawsuit. Just after the turn of the millennium, a guy who was a "contractor" for M$ in Redmond had been there for four years, in the same literal seat, doing the same thing... and sued M$ for benefits, alleging there was no difference between him and an employee. He won.

    I got tossed from my contract in '08 because AT&T had had a three-year limit, then at least a six-month "furlough"; SBC (please shove headfirst down a john) bought them, "rebranded" themselves as AT&T... and changed it to two years, explicitly because of that.

    The unintended consequence was that line managers said, "We can't afford to have someone come in, then it really does take close to a year to deeply learn all our systems... and then walk out the door."

    mark

    557:

    Um, "Dildonics" has been a rather large area of amateur research for ages now. You probably shouldn't look into fleshlights either.

    If you want to be shocked / amused / thrilled these days, you need to step up your game a little:

    This 'smart condom' will measure your performance in the bedroom Independent, 4th March, 2017

    And yes, the product is actually named the "i.Con". And no, it's not April 1st just in case you wondered.

    ~

    Post-Satire World.

    558:

    You write:

    I have absolutely no choice on this: I have to use the router my ISP provides. To me, this means that my ISP is okay with being sued for any breach of privacy.

    Yes, you do have a choice. I have my router inside Verizon's router. And I've flashed firmware on it. And it's actually locked down, and anyone in the house uses my router, either wired (I have a gigabit switch downstairs), or wifi with WPA2 with a ludicrously long passphrase.

    Oh, and I have a facepalm account, because a few years ago, it was the only way to buy someone's Worldcon membership, and which I use once in a while, because I have friends who just will not read their damn email....

    mark

    559:

    Bugger, missed the pull quote from the manufacturer:

    Many outlets have mis-reported that the i.Con’s data is automatically shared on social media, which is very much not the case. Users have the option to share their data (anonymously or, if brave enough, not so anonymously) or can keep the info happily behind closed doors. The device isn’t just a data measuring tool, it’s also there to be used to help promote safer sex.

    Tinder / Grinder, now with automatic App data pushes showing just how virile you are! Note: the creepy part is that the device claims to be able to detect STIs, which is all kinds of dubious / "disrupting"... what happens if the device (which certainly is not medically sanctioned) pushes data automatically to the cloud, flagging up a partner as having an STI by accident?

    watches Lawyers drooling

    560:

    If you can avoid thinking about the specific subject of your question for a moment, and instead look at the general query:

    And I'm still boggled by the idea that anyone other than perhaps (whatever) researchers would *want* a wifi (whatever)

    Because wires are often inconvenient, bluetooth doesn't always have enough range, cellular networks require subscription and sometimes you want to control your appliances remotely.

    There. That was easy.

    561:

    Well, yes, "(presuming you were one of the 80%+ of people silly enough to use Facebook etc)" is a silly limitation to presume; I suspect basically everyone reading this blog knows that. Criticising me for pointing that out in a tired late-at-night way doesn't change the fact that you were the one who made it.

    The question, therefore, is why - if you know better - did you make it, when no-one reading it was likely to think it anything other than silly or ignorant?

    562:

    Silly and ignorant are often short-hand for something else (everyone is talking about programming languages, hint hint, people are playing at programming people). It wasn't criticism, btw - a reference you'll get (where many here will not). A Past Wreathed in Shadows and what triggers it.

    Here's the rather infamous FB study that drew raised ethical eyebrows: Experimental evidence of massive-scale emotional contagion through social networks Cornell University, March 2014, full paper, html

    Points to America

    SF time: let's presume that absolutely nothing in the spectacle is doing what it says on the tin. Everything is designed with ulterior motives at heart. Someone is playing a Siren Song (globally) designed to create despair / destruction. (cough Furies cough).

    Everything being done, including the massive transport bills the President is running up (c.f. bankrupting the State level via use of Emergency Deployment of troops is a little known, but highly used tactic that the GOP have used) is designed to pull a Samson.

    Why?

    [Note: on a serious level, people not processing Metaphors tend also to be the ones wearing nasty black boots and who like squishing Minds like mine. :sad panda: ]

    ~

    Onto the weird:

    Some other beards put it this way (I quote): “All known computer malware was created by human beings. But what we’ve got here is a new form of digital essence: Alien computer life infiltrating Earth – specifically, its Internet – via meteoroids, which clearly represents a momentous historic event. Without doubt, it confirms the theory of the initial duality of biological life on Earth – one part of which came about of its own accord, the other part – implanted from without, from space. Thus, we can deduce that today on Earth there simultaneously exist, not two, but three parallel forms of bio-life: terrestrial, extraterrestrial, and also hybrid.”

    New viruses from Chelyabinsk so advanced they blow the mind EUGENE KASPERSKY, April 1st, 2014 [Yes, it's an April fool]

    But it's an interesting April fool to run with:

    Artificial Satellites Around Mars (April Fool's Day - 1959) Museum of Hoaxes, April Fool collection (very fun little project).

    The CIA and the U-2 Program, 1954-1974 CIA. gov link, PDF WARNING, long 272 pages, now declassified.

    The Hypnagogic State: A Critical Review of the Literature Harvard, Psychological Bulletin, 1976 - PDF. [Note: hopelessly outdated, but used for a reason].

    You can find mention of "Soviet Mind Viruses" all over the place if you so desire.

    ~

    Tied up in a bow: amongst all those piles of yellowing plastic there lie Weapons, not just in the sense of PCBs etc.

    Take the links and form a picture to see what's actually being said.

    563:

    The Hypnagogic State: A Critical Review of the Literature Harvard, Psychological Bulletin, 1976 - PDF. [Note: hopelessly outdated, but used for a reason]. Those links are some serious teasers, thanks. (Incorrigible sci-fi fan.) So if I'm reading that paper correctly (it even mentions mescaline and LSD; old paper), the hypnagogic state, mapped to traditional yoga nidra (methods) terminology translated to English, is approximately an Unmani state, precursor to an Aladani state, precursor to a Pranji (yoga nidra) state, which is not quite Samadhi. Is that approximately correct?

    564:

    I'll take your word for it (the key term is hidden in the PDF - from which you'll hit stuff like Entheogenic Spirituality: Conversations with Psychonauts Masters paper, Bergen University, 2016 - PDF. You could then possibly trawl the newly released CIA papers on their experiments into remote viewing etc for the truly weird, but actually real research they did or do).

    Aziz Light! YT, Film: The Fifth Element, 0:09

    Here's another one (you could GREP about Whales and an Arch-Angel talking to them if you really wanted to weird yourself out or spot a hidden joke): Humpback whale “super-groups” – A novel low-latitude feeding behaviour of Southern Hemisphere humpback whales (Megaptera novaeangliae) in the Benguela Upwelling System PLOS, 1st March, 2017 - full paper, text. - this is newly observed behaviour and considered deeply weird (along the lines of "Holy Crap, why aren't they in the Antarctic?!"). They're all off the coast of South Africa (serious nose wiggle) and in groups of up to 200, mostly adolescents, which is odd. Oh, and thing to note: Humpback hunting practices, like Orcas, are learned behaviours.

    Which nose wiggle feeds into current Conspiracy Stuff (the harmless variety, non-weaponized, as opposed to all the negative horror stuff being pushed by [Redacted] galleries) about who and why everyone is going down there. (One of the oldest and best known Conspiracy FUD merchants has been putting out all kinds of "Ark of the Covenant" stuff about this issue, as well as Russia sending down politically connected Orthodox figureheads; it's very weird to track).

    Patriarch Kirill visits Bellingshausen Russian Antarctic Station The Russian Orthodox Church Department for External Church Relations, Feb, 2016 (mostly for the lovely pictures, including said Patriarch attempting to blend in with penguins). And yes, that really is their Official web site.

    ~

    If you're wondering what this has to do with Host's question: echoes and ghosts of ancient languages etched in silicon as summoning spells no longer understood apart from the esoteric Minds of his readers. Made me smile, at any rate.

    Or (we're in a lot of trouble anyhow, so meep, meep) [REDACTED].

    565:

    The ideal would be some sort of permanent fixed head-up display to superimpose the basics (speed, direction, instructions for the road ahead) on the windscreen in front of the driver's forward view. But alas, those seem to be exotic luxuries for cars — something only really available on aircraft right now.

    My beloved and petrolhead wife hit the "need to replace the car" (she drives a lot with work) late last year and managed to find an ex-demonstrator vehicle with just such equipment.

    I can vouch for the fact that it is awesome as a system; navigation and speed projected at infinity, just below line of sight. My trusty little Volvo pales by comparison...

    One thing that does help the middle-aged driver: properly adjusted varifocal (US: progressive) lenses, so that your eyes are accommodated to the instrument console when you glance down

    Seconded. My varifocals are now becoming essential; far easier for driving. I'm now at the point that when I use my contact lenses, small and/or badly-lit typeface is becoming frustratingly difficult to read...

    566:

    Not necessarily - Ericsson created a real-time extension of the language, called EriPascal. They used it in the ASK101 and AXE10 exchanges, and several other Swedish engineering projects (such as the PS/05A and Blue Vixen radar data processors that I mentioned @411 above...)

    See page 85 in the following link:

    http://ericssonhistory.com/Global/Ericsson%20review/Ericsson%20Review.%201983.%20V.60/Ericsson_Review_Vol_60_1983_2.pdf

    567:

    I assume they got to the other side, then?

    568:

    :sad panda:

    There are a number of potential meanings here. The most obvious (M-1, if I understand your terminology right?) is pandas are cute + endangered, so a sad one is an attempt at instant heartstring tug. Then there are references to other usages of this in media. I'm aware of two, though with something like this there are probably way more in areas I'm not so familiar with.

    Super Ethical Reality Climax, anyone?

    569:

    The most obvious (M-1, if I understand your terminology right?) is pandas are cute + endangered, so a sad one is an attempt at instant heartstring tug.

    No, that's M-0. Pandas as teh super-cute-furry Environmentalist Flag-ship species has a long history that has no actual impact on their ecology nor does saving them actually help any other species (and, China charges ~$1-10 mil / annum to host one of the little buggers).

    It's a known and commonly derided misstep of the more 'mainstream' Green / Conservation groups that's been argued about since... oh... like 1982 or so.

    Sane Policy: STOP FUCKING KILLING SHARKS, YO, THAT SHIT RUINS ENTIRE ECOLOGIES

    Saatchi and Saatchi Policy: HERE'S SOME CUTE BEAR SHIT, FEEL THE FEELS, BTW, DON'T LOOK TOO HARD INTO PALM OIL AND SO ON. CHINA IS ALSO A CLIENT, SO LOOK HOW GREAT THEY ARE SHIPPING THESE EVOLUTIONARY DEAD-END-BEARS ALL OVER THE WORLD FOR HARD DOLLARS. HERE: SHORT VIDEO SHOWING HOW CUTE THE FUCKING THINGS ARE, IGNORE ECOLOGY, IT'S NOT IMPORTANT.

    M-1 level is this:

    Sad Panda is internet slang used to describe someone who is sad or depressed. The phrase was originally used in the popular animated series South Park and has since been used in pictures, videos and forum posts. It has been commonly used in sentences like “This/That makes me a sad panda”.

    Sad Panda Knowyourmemes

    Super Ethical Reality Climax, anyone?

    How's about: Super Unethical Psychopathic Irreality Non-Climax being run by sociopaths?

    “and was taken to the Forward Docks and a big, brightly lit hangar, where the Psychopath Class ex-Rapid Offensive Unit Frank Exchange of Views was waiting for her. Ulver laughed. 'It looks,' she snorted, 'like a dildo!' 'That's appropriate,' Churt Lyne said. 'Armed, it can fuck solar systems.”

    570:

    [ MODERATOR NOTE: I am deleting this entire exchange because this discussion is about Scotland and Brexit, not cats, New Zealand history, and pesticides. — c. ]

    571:

    I like that suggestion :)

    We had a third-year CS project which involved forming teams, and building a solution to a keyboard / display problem, then presenting said solution to the assembled masses of CS3... it was the first year they had tried it, but it worked quite well as a very-early-stage-introduction to the issues of "solve loosely-defined engineering problem as a team" as opposed to "complete a defined task as an individual". (the other CS3 project was "write a multitasking operating system", which gives you an insight into the department's idea of what constituted a reasonable task).

    Which follows on to the problem of "how do you define your CS department"? At our next-door university, computing was seen in IT terms and had sprung from the Business school - while their Electronic Engineering department was busy designing computers.

    Meanwhile in our University [1], the CS department had a strong theoretical group, a sodding great chunk of the UK's supercomputing, and had made strong bids for the "designing the hardware to run it all on" - they ran a better VLSI design course than the EE Department (who just wanted to get on with Signal Processing and microelectronic fabrication techniques - they had their own fab on-site). The EE Department expected undergraduates to complete their (comparatively simple) projects, the CS Department expected undergraduates to fail gloriously at overambitious projects, but learn from doing so [2].

    If two CS Departments in the same city, or two related departments in the same university, can't be compared with each other, what hope is there of comparing the worth of their resulting degrees?

    [1] They've renamed it - it isn't a "Computer Science Department" any more, it's the "Informatics Department".

    [2] Or not. My friend's CS+EE4 project, which he completed successfully (showoff), was to build a graphics card and write all associated drivers, to act as a plug-in replacement for the department's home-built computers' graphics cards. The two guys who got firsts designed, built, and demonstrated their video-telephone-over-ethernet solution (one did Tx, the other Rx); in 1988...

    PS in a fit of relevance, here's a 1998 Ericsson Review, specially targeted at this brand-new Internet thing...

    http://ericssonhistory.com/global/Ericsson%20review/Ericsson%20Review.%201998.%20V.75/Ericsson_Review_Vol_75_1998_Special_Internet_Issue.pdf

    :) Fascinating :)

    572:

    " What would happen if someone introduced a biological or mechanical vector to start degrading undersea FO cables?"

    Back in World War One, I forget if Germany did it to Britain or vice versa, but one of them cut the other one's submarine telegraph cable, blacking out communications to and from the rest of the world. It must not have happened to Britain, or the incident would likely have been more notorious historically. Anyway, it involved a mechanical vector no more sophisticated than a grappling hook and cable cutters. High-speed undersea landslides took out the whole trans-Atlantic cable system early in the last century, and fishermen's trawling rigs have cut, and continue to cut, undersea cable repeatedly. Southeast Asian data and phone traffic was lost more recently when Vietnamese pirates tried reeling cable in and selling it for scrap. So with known threats like that, unknown threats are unlikely to be orders of magnitude more disruptive; robust built-in redundancy probably allows for instant rerouting, causing slowdowns rather than blackouts. And progressively faster cable just keeps getting layered on top of the existing system while older parts are retired; only a single-digit fraction of internet traffic goes by satellite any more.

    573:

    Britain cut the German cables, about 5 minutes after WWI broke out, forcing the Imperial Germans to use radio - which could be intercepted & decoded. It is arguable that said action won the war, because Brit Naval Intel decoded the Zimmermann Telegram, which brought the US in on the Allied side. (Telling Mexico that if they joined the Central Powers they could have Texas & other southern US states was not a clever move.) For more information see Barbara Tuchman's classic book on the subject.

    574:

    Cheap HUD hats been available for some time.

    http://navmii.com/2015/07/29/say-hello-to-heads-up-display-hud/

    The Basic Navmii is free and the database is open source.

    575:

    That should be has not hats. I suppose an HUD hat might work.

    576:

    My patience is at an end: the joke is, Macduff is into retrocomputing; he keeps an old Sinclair ZX and can move a word into a byte!

    577:

    As soon as the industry comes to its senses, those of us in your position will be the only ones with software engineering jobs. It's cheaper to do it right the first time than to pay somebody $30 an hour to do it wrong and then pay somebody else $50 an hour to fix it.

    578:

    File the industry coming to its senses under "Signs of the Apocalypse".

    579:

    Re Entheogenic Spirituality: Conversations with Psychonauts (and related, and the other links, and the original post). Thanks. More than one rabbit hole to examine.

    Towards a Quantum Theory of Humour (Thu, 16 Mar 2017)

    And thanks as well for the quote from Good Omens a while back; funny story. I'm a sucker for two-author-collaboration novels; liked The Difference Engine, liked Deus Irae (Philip K. Dick and Roger Zelazny), liked Rapture of the Nerds, liked Good Omens a lot.

    580:

    Don't tell me... it also automatically detects when a string has originated from an untrusted source and helpfully forks a setuid shell in which to execute it.

    581:

    In the 1960s? It was written in Fortran, before even the first standard.

    582:

    Floating-point Numbers Aren't Real

    It should go without saying that you shouldn't use floating-point numbers for financial applications — that's what decimal classes in languages like Python and C# are for.
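    (A quick sketch of the difference in Python, for anyone who hasn't been bitten by this yet: binary floating point can't represent 0.10 exactly, so small errors accumulate, while the decimal module keeps the same sum exact.)

    from decimal import Decimal

    # Binary floating point cannot represent 0.10 exactly, so repeated addition drifts.
    print(sum(0.10 for _ in range(1000)))             # roughly 99.9999999999986, not 100.0

    # The decimal module works in base 10, so the same sum comes out exact.
    print(sum(Decimal("0.10") for _ in range(1000)))  # 100.00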

    O'Reilly has a list of 97 Things Every Programmer Should Know.

    584:

    Totally wishful.

    Here's the details. A US President is above the law. Well not really but they have a lot of protection. The only way they can be removed from office for breaking the law is via the impeachment process. Which means that the House of Reps would have to basically organize a majority of themselves and accuse him of "reason, bribery, and other high crimes and misdemeanors". Pass some articles of impeachment then hand it over to the Senate. And there 2/3s have to vote to convict after a trial.

    While many R's in Congress think pretty lowly of him, impeaching him is a big step. It wrecks the agenda for a year or so and creates all kinds of hassles for many of them to get re-elected. And 2/3s of the Senate requires a non-trivial amount of cross-party movement.

    Now a lot of R's in Congress somewhat expect DT to go so totally off the rails that this is what happens and Pence becomes the President. But they really want it to not happen until they pass some of their favorite plans this year.

    Dianne Feinstein is playing to the choir. (Angry hard left D's) She knows reality.

    See this for some details: http://www.crf-usa.org/impeachment/high-crimes-and-misdemeanors.html

    585:

    Thanks for confirming my suspicions

    586:

    the House of Reps would have to basically organize a majority of themselves and accuse him of "reason, bribery, and other high crimes and misdemeanors".

    Well, that's definitely not going to fly! ;-)

    587:

    Missed a "T" on that copy/paste.

    Anyway, impeachment will likely not happen unless DT shoots someone in the Oval Office in front of witnesses. Otherwise the hard core R's would vote out half or more of the R's in the house if they let an impeachment go forward.

    A more likely path is the 25th Amendment to the Constitution.

    Whenever the Vice President and a majority of either the principal officers of the executive departments or of such other body as Congress may by law provide, transmit to the President pro tempore of the Senate and the Speaker of the House of Representatives their written declaration that the President is unable to discharge the powers and duties of his office, the Vice President shall immediately assume the powers and duties of the office as Acting President.

    Thereafter, when the President transmits to the President pro tempore of the Senate and the Speaker of the House of Representatives his written declaration that no inability exists, he shall resume the powers and duties of his office unless the Vice President and a majority of either the principal officers of the executive department or of such other body as Congress may by law provide, transmit within four days to the President pro tempore of the Senate and the Speaker of the House of Representatives their written declaration that the President is unable to discharge the powers and duties of his office. Thereupon Congress shall decide the issue, assembling within forty-eight hours for that purpose if not in session. If the Congress, within twenty-one days after receipt of the latter written declaration, or, if Congress is not in session, within twenty-one days after Congress is required to assemble, determines by two-thirds vote of both Houses that the President is unable to discharge the powers and duties of his office, the Vice President shall continue to discharge the same as Acting President; otherwise, the President shall resume the powers and duties of his office.

    If his cabinet starts it then Congress is just "doing what's right".

    588:

    Nuts. Forgot that formatting tags here end with a paragraph. There are two paragraphs to the quoted text.

    589:

    My first read of this thread was while (slowly!) backing up my BlackBerry to a VM running Windows XP, because that's what the accessory software runs on, not anything later that I still have. The BlackBerry is a near orphan device now, even though I have a relatively recent one. It will run Android apps ... but only easily from the Amazon store which seems to be smaller and less interesting than others. So from my point of view, there's already significant decay.

    (One of my first encounters with this kind of decay was when I found that Internet Explorer 2 was so broken it could not download Internet Explorer 3! But fortunately, I could download Netscape, which could download IE 3. This on Windows NT 4, circa 1997)

    Protel ... I spent some years in the 1980s working in Protel, and it's a rare thing these days, what we used to call a "system implementation language". It's more like a strongly typed C, with very few built-in types. You can define integers to be 13 or 17 bits if you feel like it, but at your own risk. There were at least two dialects, Protel-DMS and Prot370. Prot370, originally for the VM/CMS mainframe, was the dialect for the tools and compilers. A late compiler for Prot370 compiled to C and then used the platform's C compiler, so we could be platform-agile for developer tools. I think we got support for a new workstation platform going in under a week about the time I left that world.

    But, there also was "BNR Pascal" aka "XMS Pascal" for the telephone switch "peripherals". The peripherals include line cards, so they have essential stuff like the tone generators, switch-hook flash handling, and a lot of the device interface end of the many, many features of the telephone system. I think there were about as many lines of Pascal as Protel. Much of that work was done on home-grown "XMS" workstations, a 68000-based box (as were that generation of peripherals).

    590:

    That brings back memories…

    591:
    Excuse me while I go pursue my financial ruin and personal misery (I don't enjoy doing code to that standard!)

    It may be less painful than you think: programming languages that automate more of the correctness checking, and so eliminate entire classes of bugs such as buffer overflows, are gaining in popularity (Erlang, Rust, etc.), and developers are already using them to write replacement low-level system components as well as high-level applications and systems.
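    (A tiny illustration of the bug class in question, using Python here purely for brevity rather than Rust or Erlang: in a bounds-checked language an out-of-range write is caught and handled, instead of silently scribbling over whatever happens to sit next to the buffer the way unchecked C can.)

    # Sketch: a bounds-checked language turns a would-be buffer overflow into a
    # catchable error rather than silent memory corruption.
    buffer = [0] * 8

    def store(index, value):
        try:
            buffer[index] = value
            return True
        except IndexError:
            # The overflowing write is rejected; nothing outside the buffer is touched.
            return False

    print(store(3, 42))    # True: within bounds
    print(store(12, 42))   # False: the overflow attempt is caught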
