There are x86 systems out there that can take more than 4 TB of RAM, in the server space at least: 6 TB in the Dell R920 and the Supermicro 4048-TRFT, and probably others. These are both small workgroup servers (5U of rack space, etc). Non-mainstream x86 goes to at least 32 TB.
Regarding Moore's law and the end of transistor scaling: we have a number of strange changes coming. Memristors, phase-change RAM, quantum computing, truly massive parallelism reaching down out of computer science R&D into practical problems. We could already "do away with disks" by putting flash on the RAM bus - but flash wears out quickly enough that that's probably a mistake. Memristors and PCRAM don't. A lot of system design assumes a Register-Cache-Cache-Cache-RAM-(bus)-Disk-Network hierarchy of access speeds. It may well flatten to Register-Cache-Cache-Cache-RAM/Persistent Fast Storage-(bus)-Network. Everything has been a file, but may not be for much longer.
Of course x86-64 can go to big RAM in a single system image, especially with custom system glue. There's nothing weird about x86 there. SGI's UV line goes to 64 TB and 256 sockets, and it's built on Xeons. The 64 TB limit is in fact an Intel limit -- currently only 46 bits of physical address on Xeon. AMD offers 48 bits of physical address but is behind on all other fronts, so I don't know if anyone makes bigger-memory systems with AMD chips.
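As a quick sanity check on those numbers (a throwaway sketch, nothing vendor-specific): addressable physical memory is just 2^(address bits), so 46 bits caps out at exactly the 64 TB SGI ships, and 48 bits would allow 256 TB.

    #include <cstdint>
    #include <iostream>

    int main() {
        // Addressable bytes = 2^(physical address bits)
        std::uint64_t xeon = 1ULL << 46;  // Intel Xeon: 46-bit physical addresses
        std::uint64_t amd  = 1ULL << 48;  // AMD: 48-bit physical addresses
        std::cout << (xeon >> 40) << " TiB\n";  // 64 TiB
        std::cout << (amd  >> 40) << " TiB\n";  // 256 TiB
        return 0;
    }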
If memristors eventually deliver, it'll be a big change. We'll see. HP has so far slipped and failed to deliver on its prior announcements of memristor products. Phase-change memory at least has some products shipping, but who knows whether it will be the next big thing or fizzle like bubble memory did.
]]>"Rarely is the question asked: Is our children learning?" - G.W. Bush
I've been in situations where I found I was not so much teaching something to someone as emitting words and teaching something at someone. That's a human layer problem, though; it's probably not subject to a hardware patch.
Three-valued logic has some nice enough properties, computationally speaking, that people have been trying to implement it for a long time. I say try, because no one has been able to do trinary despite being very clever, very determined, and having access to a lot of resources.
Not so. In fact, R&D work way back in the 1950s came up with completely successful ternary circuitry based on parametrons (non-linear oscillators pumped by three-phase AC). It was just that work on conventional approaches had moved on by then, overtaking the state-of-the-art requirements the parametron project had originally been given; plus, there was already a material installed base of conventional equipment, so it missed the boat just as selectron memory did with ferrite cores. It was a case of being overtaken by events, a bit like the Rolls-Royce Crecy engine being ready just in time for gas turbines (though there it was the disruptive technology that got there first).
So the pink starfish might well be using ternary logic, if their own path dependencies led them that way.
Even worse: the cognitive workload of programming requires that the programmer, in order to be anything other than a cut-and-paste monkey with a text editor, has to understand at least a handful of key abstractions: variables, looping, and indirection operators (pointers).
Some functional languages don't need no steenking variables, looping, or indirection operators (I even worked up one myself, Furphy, using the Forth virtual machine as a base - and why hasn't anyone else brought Forth up yet?). Of course, they need other stuff that programmers have to wrap their heads around in order to achieve anything non-trivial, e.g. recursion, indirect function calls provided via parameters, lazy versus eager evaluation, ...
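As a rough illustration of that last list - not Furphy, just a sketch of the same ideas in a mainstream language - here's looping replaced by recursion, with the combining operation arriving as a parameter (an indirect call) instead of being named directly:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // A recursive right fold: no loop construct, no mutable loop variable.
    // The combining function f is supplied by the caller.
    template <typename F>
    int foldr(const std::vector<int>& xs, std::size_t i, int acc, F f) {
        return i == xs.size() ? acc
                              : f(xs[i], foldr(xs, i + 1, acc, f));
    }

    int main() {
        std::vector<int> xs{1, 2, 3, 4};
        int sum = foldr(xs, 0, 0, [](int x, int r) { return x + r; });
        std::cout << sum << "\n";  // 10
        return 0;
    }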
You forgot 'successful' tristate hardware developed in the 60's, 70's, etc. All of which could make the same claim about being outpaced by conventional approaches. IOW your claim is unfalsifiable, or at the very least needs a lot more work before it becomes tenable.
Same thing for stuff like the memristor, which also has been coming Real Soon Now for decades. Note that I'm not flatly rejecting claims of this sort; it's just that after spans approaching half a century of relentless hyping which never panned out, I've been burned enough times to be somewhat skeptical of them. Note also that I would be very happy to be proven wrong (I'd love to be wrong about aneutronic table-top fusion, for example), but there's a rather short distance separating realistic expectations and delusions. So until proven otherwise, I'm going to go with binary as the dominant computing paradigm in the universe. (And why not? These are after all just differing implementations of the universal Turing machine.)
Well, yes.
It's not as if someone playing with electronics - for pure research or just as a hobby - couldn't make a trinary logic circuit now. But there hasn't been a commercial application for a long time. Imagine a trinary-based computer system with twenty years of development behind it, from hand-wired components to a full integrated circuit chip; such a thing might be as capable as an 8086. The famous Intel 8086 did have decades of expensive R&D between it and ENIAC. And it's long obsolete now.
To be sure, someone trying to build a Turing-complete computational device with trinary logic in the WWII era would have had to get clever with the memory storage system. For example, a Williams tube only worked in binary, though it could store over a thousand bits for as long as the power lasted. On the other hand, drum memory was patented back in 1932 and could in theory be coded with +, 0, and - charges.
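Those +, 0, and - charges are exactly the digit set of balanced ternary. A minimal sketch of the encoding (the function is purely illustrative, not taken from any historical machine):

    #include <iostream>
    #include <string>

    // Encode an integer in balanced ternary with digits '+', '0', '-'
    // (representing +1, 0, -1), most significant digit first.
    std::string to_balanced_ternary(int n) {
        if (n == 0) return "0";
        std::string out;
        while (n != 0) {
            int r = ((n % 3) + 3) % 3;        // remainder in {0, 1, 2}
            int d = (r == 2) ? -1 : r;        // balanced digit in {-1, 0, +1}
            char c = (d == -1) ? '-' : (d == 1) ? '+' : '0';
            out.insert(out.begin(), c);
            n = (n - d) / 3;                  // peel off the digit just emitted
        }
        return out;
    }

    int main() {
        std::cout << to_balanced_ternary(5) << "\n";   // "+--"  (9 - 3 - 1)
        std::cout << to_balanced_ternary(-5) << "\n";  // "-++"  (-9 + 3 + 1)
        return 0;
    }

Note the symmetry: negating a number just swaps + and -, which is one of the representational niceties ternary advocates point to.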
The window in which an N-state logic base could have been a reasonable contender was very small. It took a bit longer for designers to stop fiddling with word length and adopt the 8-bit byte. You can no doubt supply your own examples of technology lock-in.
]]>Fred Brooks said one of the things he was most proud of was winning the debate on the IBM 360 project to use 8 bits instead of 6 for byte length. I think he said Gene Amdahl was on the other side of the debate.
Waitaminute, you're sneaking in the assumption that trinary implementations are essentially no better than binary. Yeah, I'll grant you that the development of 50 Hz vs 60 Hz power networks demonstrates strong path dependencies -- because at the end of the day there are no strong reasons to prefer one over the other. And that's most definitely not what was claimed by the advocates of trinary logic. Now if you're right, the 'successful' development in the 50's should have been a one-off (as I've already pointed out). Instead, people have kept trying to do full implementations not just in the 50's, but also in the 60's, 70's, and 80's (anyone remember fluidics?)
Why is this the case if -- per your claim -- there really isn't much of an advantage to trinary logic? Were they all just plain wrong?
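For reference, the advantage ternary advocates usually point to is radix economy: representing a number N in base r costs roughly r * log_r(N) (r distinguishable levels per digit, times the number of digits), and that product is minimized at base e, which makes 3 the best integer base by that measure. A quick sketch of the arithmetic - bearing in mind that this cost model is itself the contested assumption:

    #include <cmath>
    #include <iostream>

    // Radix economy: cost of representing N in base r,
    // modeled as (levels per digit) * (number of digits).
    double radix_economy(double r, double N) {
        return r * std::log(N) / std::log(r);
    }

    int main() {
        const double N = 1e6;
        std::cout << "base 2: " << radix_economy(2.0, N) << "\n";            // ~39.9
        std::cout << "base 3: " << radix_economy(3.0, N) << "\n";            // ~37.7
        std::cout << "base e: " << radix_economy(std::exp(1.0), N) << "\n";  // the minimum, ~37.6
        return 0;
    }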
I did that last night, so no, it doesn't. Why would you load it at 1000 Mb/sec? Of course my system is a distributed cluster, but HP just announced this
C++ may be reducing the prevalence of this, as C++ programmers are generally aware that "pointer to function" and "pointer to member function" are not interchangeable.
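A minimal illustration of the distinction (the types are made up for the example): a pointer to member function has a different type from a plain function pointer and needs an object to be invoked on, so neither converts to the other.

    #include <iostream>

    struct Widget {
        int scale = 2;
        int apply(int x) const { return x * scale; }  // member function
    };

    int twice(int x) { return x * 2; }  // free function

    int main() {
        int (*fp)(int) = &twice;                        // pointer to function
        int (Widget::*mp)(int) const = &Widget::apply;  // pointer to member function

        Widget w;
        std::cout << fp(3) << "\n";       // 6: no object involved
        std::cout << (w.*mp)(3) << "\n";  // 6: requires an object on the left
        // fp = mp;  // error: the two pointer types are not interchangeable
        return 0;
    }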
What I suspect is going to happen, though, is that once Moore's law - presently the cheapest way to add more computing power - dies out, the decrease in the cost of computation (both capital costs and power costs) is going to hit rapidly diminishing returns. There's a certain feedback loop here: right now people are constantly replacing their hardware with newer hardware, so there's a lot of money in hardware manufacturing; once people stop doing that, there's less money, there are fewer smart people working on advancements, and the improvements slowly grind to a halt.