Of course x86-64 can go to big RAM in a single system image, especially with custom system glue. There's nothing weird about x86 there. SGI's UV line goes to 64 TB and 256 sockets, and it's built on Xeons. The 64 TB limit is in fact an Intel limit -- currently only 46 bits of physical address on Xeon. AMD offers 48 bits of physical address but is behind on all other fronts, so I don't know if anyone makes bigger-memory systems with AMD chips.
If memristors eventually deliver, it'll be a big change. We'll see. HP has so far slipped and failed to deliver on prior announcements of memristor products. Phase-change memory at least has some products shipping, but who knows if it will be the next big thing or if it'll fizzle like bubble memory.
But the big point is that Moore's Law is running out of steam. With the way things are going you will run into other hardware or budget limits before you bump up against 64-bit pointer size trying to address RAM, even in 2034.
We can double-check John Mashey's prediction against historical data to see that the pace of change is already slowing. SGI's Indigo2 was their first workstation with a 64-bit processor. When it was released in 1994, it could hold up to an astonishing 384 MB of RAM: http://www.typewritten.org/Articles/SGI/indigo2-tm-gd_09-94.pdf
If the 2/3 of a bit per year rule held true over 20 years, we should expect today's high end workstations to hold up to...
2 ^ ((2/3) * 20) * 384 ~= 3,963,368 MB of RAM. That is, about 3,870 GB of RAM. There's nothing even close today. I can't find a current workstation that holds more than 512 GB of RAM, though counterexamples would be welcome.
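The arithmetic above is easy to sanity-check. A minimal sketch, using only the figures already stated (the 1994 Indigo2 baseline and the 2/3-bit-per-year rule):

```python
# Check the "2/3 of a bit per year" growth rule against the 1994 Indigo2 baseline.
base_mb = 384          # Indigo2 maximum RAM in 1994, in MB
bits_per_year = 2 / 3  # Mashey's rule of thumb
years = 20             # 1994 -> 2014

predicted_mb = base_mb * 2 ** (bits_per_year * years)
print(f"predicted: {predicted_mb:,.0f} MB (~{predicted_mb / 1024:,.0f} GB)")

# The actual high end (~512 GB) falls short by roughly a factor of 7.5.
shortfall = predicted_mb / 1024 / 512
print(f"shortfall vs. 512 GB: {shortfall:.1f}x")
```

So even granting a generous 512 GB top end, real workstations have lost about three doublings against the old trend line.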
Perhaps this is another motivation for the Anglosphere governments cooperating to ensure that no foreign computer system is ever safe from remote penetration. The collateral damage is that domestic computer systems must remain insecure too, because improved domestic security would spread internationally, and national security types prioritize offense over defense. As I wrote elsewhere recently, both the US Department of Defense and the National Security Agency seem to have permanently confused "Keeping Foreigners in Danger" with "Keeping America Safe."
The US government has evolved elaborate ritual displays of outrage about Chinese hackers stealing secrets from American computers. The actual behavior of law enforcement and security agencies shows that they prefer pervasive information insecurity over pervasive information security. Better that ten domestic systems remain vulnerable to remote exploitation than one foreign system remain secure.
The paper-only books are getting scanned or photographed and uploaded as PDFs anyway. I would be really surprised if you could name a published scientific/technical book for undergrads that isn't already easily pirated. Even "perfect" DRM would succumb to the analog hole just like paper: mount a camera above the reader tablet, snap a picture, advance the page, repeat.
Actually, I don't understand what the problem is from the publisher POV if your institution is the ebook buyer. Journal publishers like Elsevier strongly dislike pirates but I don't think they actually lose much revenue from them, because they sell overwhelmingly to institutions. Unlike individual students, academic librarians are not going to stop subscribing to a journal because they can find it on torrentz.ru.
So long as the institution commits to buying copies of ebooks proportionate to assigned reading lists, as opposed to delegating that responsibility to students who may copy instead of pay, I don't see how the publisher could lose out. They have already been paid for the product. Whether students use the official copies or unofficial ones afterward doesn't affect their finances.
I assumed that during the Manhattan Project they used electromagnets instead of permanent magnets for calutrons because it was a lot easier to achieve the desired field strength, homogeneity, and stability with resistive electromagnets -- rather like how older low-field NMR spectrometers used good-sized resistive electromagnets. But of course people have now demonstrated that you can build a good low-field NMR device with modern permanent magnets at much lower capital and operating costs (see "benchtop NMR," "permanent magnet NMR"). The two big keys seem to be higher field strength from samarium-cobalt magnets and good temperature control to prevent field strength variation.
Given the advances in electromagnetic modeling, permanent magnet materials, and closed-loop control systems since 1942, I wonder if it would be practical to build a permanent magnet calutron today, and if so how its costs per separative work unit might compare with other technologies. Could a calutron also be used to upgrade plutonium from civil waste fuel into weapons-grade Pu-239? There's a smaller mass difference between Pu-239 and Pu-240 than between U-235 and U-238, but the total enrichment needed is much lower than going from natural uranium to HEU. The radiation body burden is going to be a lot higher for whoever's assigned to scrape out the calutrons in the plutonium version, but maybe they will have good protective suits. Or not care.
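To put a number on that "smaller mass difference" point, here's a quick comparison of the relative mass splits the calutron would have to exploit (standard atomic masses, rounded):

```python
# Relative mass difference available to electromagnetic separation,
# for the two isotope pairs discussed above (atomic mass units).
m_u235, m_u238 = 235.044, 238.051
m_pu239, m_pu240 = 239.052, 240.054

du = (m_u238 - m_u235) / m_u235      # ~1.3% for the uranium pair
dpu = (m_pu240 - m_pu239) / m_pu239  # ~0.4% for the plutonium pair
print(f"U-235/238: {du:.2%}  Pu-239/240: {dpu:.2%}")
print(f"uranium pair offers ~{du / dpu:.1f}x the relative mass split")
```

Roughly a factor of three less separation leverage per pass for plutonium, which is partly offset by the much smaller total enrichment needed.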
I also wonder if pursuing a modernized calutron, which requires more research trailblazing but perhaps hits fewer materials limits or technology controls than e.g. centrifuges, would be attractive to a would-be proliferator with some resources but not great international ties. Say Myanmar or Uzbekistan.
Manganese nodules and ferromanganese crusts aren't especially rich phosphorus sources, but may contain 0.5 to 2% phosphate as P2O5. That's not enough to justify mining on its own, but it could be a significant coproduct stream, especially if the nodule processing is under the jurisdiction of nations that have laws against phosphate discharge to prevent eutrophication. You have to capture the phosphate just to avoid water pollution, so might as well sell it.
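A rough sketch of what that coproduct stream could look like. The throughput figure here is purely hypothetical; only the 0.5-2% P2O5 range comes from the comment above:

```python
# Hypothetical coproduct arithmetic: a nodule processing stream of
# 3 million tonnes/year (made-up throughput) at 1% P2O5 by mass
# (midpoint of the 0.5-2% range).
throughput_t = 3_000_000   # tonnes of nodules per year (hypothetical)
p2o5_frac = 0.01           # 1% P2O5 by mass

p2o5_t = throughput_t * p2o5_frac
print(f"{p2o5_t:,.0f} tonnes of P2O5 per year as coproduct")
```

Tens of thousands of tonnes per year from a single mid-sized operation: not a phosphate mine, but not a rounding error either.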
I think that a lot of this supply chain complexity is optional. It's been driven by decades of optimization for low labor costs and high specialization, with transport costs not a big consideration. Maybe higher transport costs would end with advanced electronic products not being produced at all; I think it's far more likely that they end in re-consolidated supply chains. Something more similar to those dark ages of the 1980s when California's Silicon Valley companies literally processed silicon, made physical hardware and components of hardware, and even did the final assembly and packaging work with Californian labor. Or something like Shenzhen today. If you have to connect 95% of your supply chain by railway instead of sea, big land masses still permit a "localized" production system extending much more than 500 miles.
China has all the raw materials (lithium, steel, aluminum, silicon, rare earths, petrochemicals, gold) needed to make iPhones or other complicated electronics. So does North America. So do most large land masses. For locales that might need to import the more exotic materials to fill holes in their supply chains, like indium or gold, recall again that these materials are required in very small quantities and easy to transport. World production of indium for all purposes -- including hundreds of millions of LCD displays -- is under 700 tonnes per year. Gold use in electronics is under 350 tonnes per year.
Solar radiation management approaches, e.g. artificial sulfate aerosols, are a band-aid at best. They could make things worse if they're used as an excuse to continue burning lots of stuff for a few more decades. A better use (I know, I know -- but let's imagine people can learn) would be to suppress feedback mechanisms while slower but more enduring mitigation measures come online.
One of those measures could be afforestation, possibly in combination with conversion of biomass to biochar. But that's a fairly slow mechanism for carbon recapture, and biomass eventually oxidizes back to CO2. Even as char it reenters the atmosphere eventually, though not as fast.
IMO the best long term option is accelerated silicate weathering. The natural geological carbon cycle is that silicates of calcium, magnesium, sodium, and potassium exchange with CO2 dissolved in water to free silica and form carbonates. The geological carbon cycle ultimately gets the final say, but it is slow compared to the biological cycle.
There is more than enough easily accessible basalt in the world to soak up the emissions from all burnable fossil fuels. It would need to be crushed to something resembling coarse sand or fine pebbles and then distributed in near-shore ocean environments where wave agitation will keep it from forming protective crusts that retard ion exchange. Even under those favorable conditions we're talking interventions that operate on the decades-to-centuries scale. But if the outline in The Long Thaw is right, humans will have many centuries in which to regret past emissions and try to reverse their effects. The basalt-crushing solution requires vast scales, as befits the vast scale of the problem, but it is energetically efficient (maybe 10 kilowatt hours per tonne of sequestered CO2), scalable, handles diffuse as well as point sources of CO2, counteracts both falling ocean pH and rising temperatures, operates fine with intermittent power sources, and the fix endures on geological time scales.
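A scale check on that ~10 kWh/tonne figure. The ~35 billion tonne annual emissions number is an outside figure I'm supplying here (approximately right for global fossil CO2 in the early 2010s), not something from the comment:

```python
# How much electricity would basalt crushing need to keep pace with
# global fossil CO2 emissions, at the ~10 kWh/tonne figure above?
kwh_per_tonne = 10          # crushing energy per tonne of sequestered CO2
annual_co2_tonnes = 35e9    # rough global fossil CO2 emissions per year

annual_twh = kwh_per_tonne * annual_co2_tonnes / 1e9
print(f"{annual_twh:,.0f} TWh/year to crush basalt for one year's emissions")
# World electricity generation is on the order of 20,000+ TWh/year,
# so the crushing energy is under ~2% of it.
```

That's a huge absolute amount of energy, but a small fraction of world generation, and since crushing tolerates intermittency, it's a good match for surplus renewables.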
What about the next ice age that humans would prefer to avert? Well, you can make nitrogen trifluoride with late 19th century technology, and it's ~17000 times as potent a GHG as CO2. If people still know how to make electricity they can warm the planet pretty easily; it's cooling that is harder.
However, during that time, there's also been inflation. That $1.25 in 1990 dollars should be $2.09 in 2010 dollars (http://www.usinflationcalculator.com/).
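The same adjustment done explicitly. The CPI-U annual averages here (130.7 for 1990, 218.056 for 2010) are figures I'm supplying, not from the linked calculator:

```python
# CPI adjustment: convert 1990 dollars to 2010 dollars using the ratio
# of annual-average US CPI-U index values.
cpi_1990, cpi_2010 = 130.7, 218.056
amount_1990 = 1.25

amount_2010 = amount_1990 * cpi_2010 / cpi_1990
print(f"${amount_1990:.2f} in 1990 ~= ${amount_2010:.2f} in 2010")
```

Which lands on the same $2.09 the calculator gives.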
The World Bank's poverty reports use constant dollars and are adjusted by purchasing power parity, so they take into account different costs of living by location and changing values of currency: http://lilt.ilstu.edu/gmklass/pos138/datadisplay/sections/poverty/poverty.htm
I'm not sure why the World Bank doesn't state that plainly in text on their Poverty and Equity Data page, instead asking you to watch a video to get answers to FAQs.
Why do you think heartbleed is less dangerous? It was actually easy to weaponize (have you looked at the sample exploit code?), didn't leave traces in most logging setups even for retrospective detection, and didn't require access to your target's live communication stream.
Oh, and according to reports today the NSA discovered heartbleed two years ago and has been using it: http://www.bloomberg.com/news/2014-04-11/nsa-said-to-have-used-heartbleed-bug-exposing-consumers.html
The world-record Si PV cell in 1977 was less efficient than the median silicon cell being manufactured today. Fill factors for early modules were also much lower than today's. Output per module, per kilogram of structure, and per square meter of space has roughly tripled since dedicated terrestrial PV manufacturing started in 1979. Since 2000, average silicon consumption per peak watt (including processing losses) has fallen by nearly 2/3.
Three years ago people were also saying that there was no more room to squeeze out costs and that prices would go back to 2010 levels after the Chinese capacity surplus exited the market. That has proven deeply incorrect. Maybe you're right that there is no more room for significant cost reduction now -- it has to be true at some point -- but I see enough changes in motion that I don't think it is true yet.
I don't think that the UK or Germany could really go solar-only, even if hardware and installation costs fell another 75%. The seasonal sun imbalance is just too great. But if solar plus short term storage meant that e.g. fossil plants could be idled several months out of the year, and operated fewer hours during the rest of the year, that would still mean deep emission cuts. Fossil fuel plants produce the vast majority of their carbon emissions in operation, not construction. Most of the world's population lives closer to the equator than Germans, so solar's global impact will be greater than in Germany even if that's where it first got big.
A couple of happy examples from sunnier regions:
Austin Energy in Texas just signed a power purchase agreement to take output from 150 MW of PV capacity for 25 years at about $50 per megawatt hour: http://www.bizjournals.com/austin/news/2014/04/01/as-solar-power-gets-cheaper-austin-energy-gains.html
Of course that has the 30% federal tax credit built in, so unsubsidized it would be more like $75 per megawatt hour. If natural gas goes back above $7 / MMBtu then this Texan solar project would be cheaper than new-build gas per megawatt hour even without the tax credit. When will gas go that high again? I don't know; I'm not qualified to judge whether fracking is really a flash in the pan or will deliver abundant gas for decades.
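Rough numbers behind both claims. Treating the 30% investment tax credit as if it scaled the PPA price directly is a simplification (the ITC actually applies to capital cost), and the ~7 MMBtu/MWh heat rate for a combined-cycle gas plant is my assumption, not something from the deal:

```python
# Back out an unsubsidized PPA price, naively removing the 30% ITC.
ppa_subsidized = 50.0   # $/MWh, the Austin Energy deal
itc = 0.30
ppa_unsubsidized = ppa_subsidized / (1 - itc)   # ~$71/MWh, same ballpark as "$75"
print(f"~${ppa_unsubsidized:.0f}/MWh without the tax credit")

# Gas comparison: a combined-cycle plant at ~7 MMBtu/MWh heat rate burning
# $7/MMBtu gas spends ~$49/MWh on fuel alone, before capital and O&M.
gas_price, heat_rate = 7.0, 7.0
print(f"gas fuel cost: ${gas_price * heat_rate:.0f}/MWh")
```

At $7 gas, fuel alone is ~$49/MWh; add capital and O&M and new-build gas plausibly clears the solar project's unsubsidized price, which is the comparison the comment is making.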
The 50 MW San Andres plant in Chile is now operating, selling electricity on the spot market. It was built and is operating with no price guarantees or subsidies: http://www.pv-tech.org/news/financial_close_on_first_merchant_solar_project_in_chile