
Tanenbaum's Law v. the Fermi Paradox

Tanenbaum's Law (attributed to Professor Andrew S. Tanenbaum) is flippantly expressed as, "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway". It's a profound insight into the state of networking technology: our ability to move bits from A to B is very tightly constrained in comparison with our ability to move atoms, because we have lots of atoms and they take relatively little energy to set in motion.

Which leads me to ask the following question:

1. Postulate that we make contact in the near future with an extra-terrestrial intelligence ten light-years away.

2. We can communicate with them (and want to communicate with them) by two means: we can beam data at them via laser, or we can send a physical data package (a "tape" travelling cross-country).

3. Our "tape" package will be made of something approximating the properties of memory diamond, i.e. on the order of 1022 bits/gram.

4. We will assume that we can use a laser-pumped light sail (with laser efficiency of 10%) to transfer momentum directly to a hunk of memory diamond. We're going to ignore the sail mass, to keep things simple. And we're going to assume there's another laser at the other end to allow our alien friends to decelerate it (so if you need x GJ/gram to reach a specific speed, we can allow for 2x GJ/gram for the trip).

5. Our reference interstellar comms laser, with a beam output of 1GW, will be able to deliver 2.6 billion photons/second to a suitable receiver 10 light years away while switching at 1GHz. If we increase the bit rate we decrease the number of photons per bit, so this channel probably limits out at significantly less than 1Gbit/sec (probably by several orders of magnitude). I'm going to arbitrarily declare that for starting purposes our hyper-sensitive detectors need 1000 photons to be sure of a bit (including error correction), so we can shift 2.6Mbit/sec using a 1GW laser. (There's a quick numeric sketch of this channel just after this list.)

6. Ignoring latency (it will be one year per light year for lasers, higher for physical payloads), which is the most energetically efficient way to transfer data, and for a given energy input, how much data can we transfer per channel?
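For concreteness, here's a minimal back-of-the-envelope sketch (Python) of that reference laser channel, using only the figures assumed in points 4 and 5; it's arithmetic, not a real link budget:

    # Reference laser channel from point 5: 1GW of beam output at 10% wall-plug
    # efficiency, 2.6e9 photons/second arriving at the receiver 10 light years away,
    # and 1000 photons needed per bit.
    beam_output_w   = 1e9
    efficiency      = 0.10
    photons_per_sec = 2.6e9
    photons_per_bit = 1000

    input_power_w  = beam_output_w / efficiency         # ~10GW of generating capacity
    bits_per_sec   = photons_per_sec / photons_per_bit  # ~2.6 Mbit/s
    joules_per_bit = input_power_w / bits_per_sec       # ~3,850 J/bit, i.e. "roughly 4000"

    print(bits_per_sec / 1e6, "Mbit/s")
    print(round(joules_per_bit), "J/bit")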

Here's my initial stab at it, which is probably wrong (because it's a Saturday night, I've been working for the past nine hours or so, and I'm fried):

Let's pick a 10 year time-frame first. 10 years = 315,576,000 seconds.

Laser:

Running a laser for 10 years will emit 3.16 x 10^17 joules in that time; at 10% efficiency, roughly 3.16 x 10^18 joules of energy is consumed. It will deliver 0.82 x 10^15 bits of data. So, roughly 4000 joules/bit.

"Tape":

A packet of memory diamond with a capacity of 1 x 10^14 bits has a mass of roughly 10^-8 grams.

Kinetic energy of 10^-8 g travelling at 10% of c (30,000 km/sec, 30,000,000 m/s) = (10^-8 * 30,000,000^2)/2 = 0.9 * 10^6 J. Double the energy for deceleration and we still have 2 x 10^6 joules, to move 10^14 bits. So, roughly 10^8 bits/joule.

Eh?

Let's dink with the variables a bit. Even if we allow individual photons to count as bits at 10 light years' range, our laser still maxes out at around 4 joules/bit. And even if we allow for a 10,000:1 mass ratio for our data-carrying starwisp, and impose the same 10% efficiency on its launch laser's energy conversion as on our communication laser, we get 1,000 bits/joule out of it.

As long as we ignore latency/speed issues, it looks to me as if Tanenbaum's Law implies a huge win for physical interstellar comms over signalling. Which in turn might imply an answer to the SETI silence aspect of the Fermi paradox.

Of course, this is just an idle back-of-the-envelope amusement and I've probably borked my calculations somewhere. Haven't I?

217 Comments

1:

Kinetic energy of 10^-8 g travelling at 10% of c (30,000 km/sec, 30,000,000 m/s) = (10^-8 * 30,000,000^2)/2 = 0.9 * 10^6 J.

1e-8 * (3e7)^2 = 1e-8 * 9e14 = 9e6. You're low by a factor of 10.

Which doesn't change a lot.

2:

Well, and there was the /2, so 4.5e6, doubled for deceleration back to 9e6.

Mass of the solar sail probably shouldn't be ignored. Also, solar sails are maximally efficient in reaction mass, therefore must be expensive in energy, so the total energy used is probably a lot more than the kinetic energy of the payload at cruising speed. You want to play with E=p*c.

3:

I've got the flu, so I'm not even going to think about checking your math, but I think you've got the right idea barring a really incredible improvement in compression protocols.*

The corollary is that bulk/unimportant messages go by memory diamond, and high-priority messages go by laser (they only take ten years to get there.) The economics of pricing, particularly with multiple destinations at multiple distances, which also imply multiple bandwidths, get interesting, however.

So imagine a price structure that looks like this:

Uncompressed data sent by message laser: INSANELY EXPENSIVE, but very easy to decode on the other side.

Data sent with ordinary compression by message laser: Incredibly expensive, but once again, fairly easy to access on the other side. It does, however, require some processing power.

Data sent with high-powered, new compression algorithm by message laser: Very expensive, but it takes a long time - months/years - to decompress. However, the timing still beats shipping computronium from star to star.

After that comes bulk data at .1 c (or whatever.)

  • Every piece of data in a computer is essentially a long hexadecimal number. Theoretically, there's an equation which creates that number which might be much shorter than the data itself. I forget what that particular bit of math is called. This implies the possibility of much better compression than we currently have available. Since we're trying to beat something traveling light years at .1 c we don't care if it takes years to encode/decode a message.
4:

Well, it's the usual bandwidth vs latency thing. You're assuming that only bandwidth is interesting.

5:

Finally, if the target is 10 ly away, then the laser will reach them in 10 years. The diamond won't. Ignoring (which we probably shouldn't) the acceleration phase, the 0.1c diamond would take 100 years to reach the target, and the laser would convey 10x as much information. For 10x as much energy of course, so the Joules/bit wouldn't change, but the comparative bandwidths would, and we'd see that communicating faster takes more energy, which would be duuuh true if we were sending diamond packets at different speeds, not too surprising if a laser takes more energy than snail mail.

Ah, you said "As long as we ignore latency/speed issues"

Well yes, but that's a big thing to ignore. You'd get even more bits per joule if you sent stuff at 10 km/s.

6:

I don't see how compression changes anything, because it would reduce the number of bits needed for the physical-media route...

7:

Compression doesn't change much when you go the physical media route and you're shipping terabytes. But given the expense of sending data by message laser, compression becomes essential.

8:

A lot of the math depends on that memory diamond.

I'm tangled up in the middle of other stuff, so I don't have time to calculate how big something would have to be to destroy that message with a strike at 0.1 c. However, if it's on the scale of a hydrogen atom (my first guess), I don't think it's a very safe way to send a message.

In any case, I'll give you the post Peak Oil scenario for how this could be used on Earth. When we get to the point where we can't afford to replace communications satellites, we'll instead repurpose our ICBM fleet (downscaled, of course, although bigger than the LDRS models), to ship memory crystals between continents. In the Pacific, we can use the island of Hawai'i as the staging point. Memory rockets would rain down from both the US and China on an hourly basis, and the Hawaiians would clean up by retrieving the payloads, rehousing them in new rockets that use fuel cracked from the recycled plastic of the North Pacific Garbage Patch, and blast them on their way. Call it the street finding its own uses for Cold War missile tech, and keep one's tongue firmly in one's cheek.

Heck, if memory diamond is that good, I may get to the point where I can use a hunting rifle to send documents to the local bureaucrats. All I have to do is pack the bullet with the information, and fire it really accurately at the County office building. What could possibly go wrong? They could send documents back to me the same way. Wouldn't that be fun, especially if I missed the bureaucrats and, say, hit the waiting room instead?

9:

Anyone who has ever needed to transmit 1TB of data in the states knows that the US Postal Service has the best bandwidth and packet sizes for sending data. The latency is a killer, though!

Now I want to play Civ via snail-mail.

10:

The memory unit itself (weighing 1e-8 g, i.e. 10 nanograms), assuming it has a density around that of normal diamond (~3.5 g/ml), would be about the size of a single human cell. The starwisp craft itself would need to be much heavier than the cargo. In fact, it's the mass of the cargo which is negligible.

Also, I like this idea, but I think you're cheating a bit with the memory diamond concept: it's attractive as an ultra-high density medium, but its manufacture and utility are speculative. If you're going to speculate about the properties of memory diamond, why not speculate about the bandwidth of the laser? I mean, you're assuming we use only a single monochromatic laser? Wouldn't a multichannel system be better? Could the receiver be more sensitive? Would some sort of modulated neutrino beam be better? Who knows. You're leaning relatively conservatively on the laser side and relatively liberally on the "tape" side.

I'd say stick with what you're doing, but be fair(er): imagine the mass if we were talking about the zettabyte flash chips of the 2030s.

If you're interested in theoretical maximums (if memory diamond is close to the limit of matter based storage, which I doubt it is, why just stick with one element with only two stable isotopes?) you'd need to go way farther out into theoretical physics/information science land.

11:

Generalize it to a millennium rather than a decade and the position gets more favourable to our physical transfer.

You launch a memory diamond starwisp every decade; that means 90% of the data gets to its destination within the communication period (the rest is held up by latency of a century -- 10 ly at 0.1 c). The comparable laser communicator gets 99% of the data through, but at multiple orders of magnitude higher cost.

The longer the continuous period we're looking at, the narrower the gap imposed by differential latency between the two channels.
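In numbers, it's a one-liner; here's a minimal sketch (Python) of that trade-off, treating launches and transmissions as spread evenly over the window:

    # Fraction of data that arrives within a fixed communication window, given a
    # fixed one-way latency. Evenly-spread launches are an assumption of the sketch.
    def fraction_delivered(window_years, latency_years):
        return max(0.0, (window_years - latency_years) / window_years)

    print(fraction_delivered(1000, 100))  # starwisps over a millennium: 0.90
    print(fraction_delivered(1000, 10))   # laser over a millennium: 0.99
    print(fraction_delivered(10, 100))    # starwisps over a single decade: 0.0, nothing has arrived yet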

12:

The number of channels on the laser is irrelevant, since we're talking about bits per joule. All you're suggesting is making a laser that can transmit more photons at the same time- the cost per photon hasn't gone down. And a laser with 10% delivery efficiency across interstellar distances is equally as fantastic as a memory diamond, if not more so. I think it's a wash.

13:

Cost is a different issue, though. The cost to produce energy is not simply related to the energy itself. The power for the laser (according to Wolfram Alpha), run for 10 years, is about 25% of the US's electrical output for 2001. So we would need a 2% increase per year in energy production? That doesn't seem like a lot.

When you talk about cost, you have to factor in the most expensive thing, which is human time. Is it worth the effort to build, launch, and monitor starcraft, or just send out a laser? I have colleagues who do genomics research, and they sometimes spend days transferring large amounts of data around. I've asked them about mailing hard drives or DVDs, which would be cheaper, and not much slower, but they say "Who has time for that? I'd rather just start the transfer and forget about it. It takes less time to scp a file and let it go than it would be to put a stamp on a box."

14:

The latency/bandwidth give and take is rather obvious. You can see it even in the price differential between air and surface delivery: air deliveries are a small fraction of all international deliveries because they cost more, but they're still used because latency is important in a lot of cases. I'd risk a guess that laser will be used for small important packets (setting up the delivery schedule, "emergency" messages, etc.) while the starwisps will be used for the bigger and heavier data deliveries ("we got your research base on plasma, here is our research base on magnetic gates to date, updates to follow each decade on receipt of your own updates").

Another thing you forgot is that you need some kind of backup for the starwisp data, which you don't for the laser: it can be anything, as long as it makes sure the receiving civilization finishes up with at least one whole and well corroborated copy of the data you sent. With a century's worth of latency you really don't want to keep them waiting.

15:

If you don’t have the requisite breakthrough in data compression schemes you can still use the starwisps to deliver a code book of phrases. Similar to the telegraphic shipping codes of steamship times, where XEBEC could mean "Accepting offer, please send immediately".

16:

Very large packet error correction is available in both cases. So I'll take 4 J/bit for the laser for now.

Mail: 9e7 J/1e14 bits, call it 1e8 J/1e14 bits, or 1e6 bits per Joule. As you note, the propulsion laser is also 10% efficient, so 1e5 bits per Joule. There's also whether the diamond is robust in the face of 100 years of 0.1c radiation, but if I'm granting single-photon bits with error correction I'll pass over that.

Propulsion energy: momentum p of the diamond is 1e-8

wait, the kinetic energy was wrong. It's 1e22 bits per gram, so 1e-8 grams, yes, but KE takes kilograms. 1e-11 kg * 9e14 m2/s2 = 9e3 Joules, or 1e4, so 1e10 bits/Joule. Then 1e9 bits/Joule after inefficiency.

momentum p is 1e-11 kg * 3e7 m/s = 3e-4. On the photon side, E=pc, so E = 3e-4 * 3e8 = 9e4 J. Another factor of 10, so 1e8 bits/J.

Memory says that the high end for solar sail acceleration is 0.01 m/s^2, v=at, so t = 3e7/0.01 = 3e9 seconds or 30 years to accelerate to cruising speed, in which time it would have traveled at^2/2 = 0.01 * 9e18 / 2 = 4.5e16 meters, or about a light year. Is the propulsion laser good for that range or will it need an upgrade and more power? Also travel time will be 30 years for the first light year, 30 for the last, and 80 years for the cruise, 140 years total.

Optimally the memory packet is the solar sail, though balancing that against not being disrupted by laser light and cruise radiation.
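For anyone following along at home, here's the corrected "tape" arithmetic from comments 1, 2 and 16 as a short Python sketch. Same assumptions as the original post -- 10^14 bits at 10^22 bits/gram, 0.1 c cruise, 10% efficient lasers at both ends -- plus E = p*c for an absorbing sail (a perfect reflector would halve the beam energy):

    # "Tape" energetics, redone with the grams/kilograms fix and the momentum-limited
    # light sail (E = p*c for an absorbing sail; E = p*c/2 for a perfect reflector).
    c = 3e8                          # m/s
    v = 0.1 * c                      # cruise speed, 3e7 m/s
    bits = 1e14
    mass_kg = bits / 1e22 / 1000     # 1e-8 grams of memory diamond -> 1e-11 kg

    ke_one_way = 0.5 * mass_kg * v**2      # ~4.5e3 J
    ke_round   = 2 * ke_one_way            # accelerate + decelerate, ~9e3 J

    p = mass_kg * v                        # ~3e-4 kg m/s of momentum to impart
    beam_energy_one_way = p * c            # ~9e4 J of laser light per leg
    wall_plug = 2 * beam_energy_one_way / 0.10   # both legs at 10% laser efficiency

    print(bits / ke_round, "bits/J, counting kinetic energy only")   # ~1e10
    print(bits / wall_plug, "bits/J at the wall plug")               # ~5e7; the 1e8 above is the one-way figure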

17:

The power for the laser (according to Wolfram Alpha) for 10 years, is about 25% of the US's electrical output for 2001. So we would need a 2% increase per year in energy production?

Er, no: because you only use the energy in real time. Run the laser for ten years: generate the power over a decade. What we're asking for is 10GW of continuous capacity, or about 10% of the UK's generating capacity -- about equivalent to two giant coal plants the size of Drax B (5GW electrical output!), or about eight to ten PWRs.

18:

Memory says that the high end for solar sail

Not solar sail; laser sail. Or microwave sail. IIRC, from the 100 Year Starship conference the proof-of-concept acceleration for a microwave sail testbed was on the order of 100g.

19:

Compression / error correction would be a given for both transfer mechanisms. Error correction is pretty much a done deal (see Turbo Codes/LDPC and the Shannon Limit). Similar concepts can be used to protect the hyper-diamond by chunking the message into separate blocks and applying error correction to deal with loss of (some) blocks. Once you know what the loss rate of the starwisps will be you can just send enough separate units, encoded appropriately, to deal with that loss - it's the same as dealing with noise on a transmitted signal.

As an aside I found that older post very interesting. You had this to say about information storage density:

"Today, I can pick up about 1Gb of FLASH memory in a postage stamp sized card for that much money. fast-forward a decade and that'll be 100Gb. Two decades and we'll be up to 10Tb."

You wrote that in May 2007, 128GB SDXC cards came out at the start of this year. Hard drive capacity growth has slowed a bit - they were approaching 1TB at that point and have just hit 4TB. Cost has dropped though, 10TB of capacity is now about 10% of what it was then (~£500 vs £5k in mid 2007). The future happens faster than even you think, which is nice to know. :)

20:

You've borked the physics of light sails. They transfer momentum efficiently, but you've assumed that all the energy in the laser is converted into kinetic energy in the starship -- not so. Momentum and energy for photons is related by E = pc where p is momentum, so your approach is out by a factor of c, about 3 x 10^8 m/s.

21:

Hmmm... why memory diamond, when graphite is slightly more thermodynamically stable at room temperature and pressure or below? Plus it has the really handy property that you can peel it into sheets of graphene, which I should think would possibly be amenable to reading the pattern of C12/C13 isotopes using an atomic force microscope or similar. C12-C12 bonds should be slightly longer than C13-C13 bonds and vibrational frequencies will be slightly higher.

Or you could go bizarrely retro and simply punch holes in the graphene sheet to represent the bits. That's perhaps a factor of 20 or so less dense than Charlie's concept using isotopic labelling, but vaguely feasible to create with something not very far removed from current technology. Paper tape on an atomic scale...

22:

Once you hit a certain distance threshold, the latency differences between laser and mail become less significant- especially because once your round-trip latency exceeds the lifetime of the originating entity, it ceases to matter. In our society, the half-life for a corporation is about 8 years- so if we were communicating with entities in Alpha Centauri, even at light-speed, 50% of the time the corporation wouldn't be around to receive their reply.

We can assume that any interstellar civilization is going to have social structures that last much longer than ours, but still- you're going to hit points where the latency ceases to matter, or where you have to accept that you're communicating on a level that spans entities.

23:

Remembering how paranoid some of the denizens are here, can I point out the beautiful absurdity of message missiles and message laser?

"No, you idiot, I wasn't trying to bomb your planet, I was trying to send you a copy of the Encyclopedia Galatica with instructions for planetary peace and interplanetary governance. It wasn't supposed to take out your space station. Really. Now turn off that terawatt message laser please, before it fries our launch facility so that we can try again? Okay?"

Interstellar war or peaceful contact. What if you can't tell the difference?

24:

Actually, I was thinking more about how we respond to attachments from unknown sources today. Your filter would be a high-powered point-defense laser, and your blacklist would be sending a planet-buster towards the message's origin.

25:

Memory Diamond? In deep space with the certainty of high-energy particle/gamma impacts, which will screw with the accuracy/credibility/redundancy/readability of the info-package contained within said diamond? Um.

26:

I'm not sure this has much impact on the Fermi paradox simply because announcing your existence is a different problem. These are for mass data transfer between two endpoints, but for announcing your existence you want to broadcast one bit ("SYN?") to everybody. The best proposals are to use a giant microwave beacon on the 21-cm hydrogen line and assume that the listener couldn't possibly interpret it as a natural phenomenon.

If I may digress, a better explanation is the Killing Star scenario: that same starwisp, if not caught properly, is a small relativistic weapon that will flatten any space station it hits.

27:

That's a tough one. And what is an entity? For all that businesses don't tend to last, the oldest corporations in the world are more than a thousand years old (and the three very oldest are hotels.)

This leads to a very interesting form of investing. You set up a corporation, of which you are the sole creditor. The corporation sends a message to Tau Ceti, and the potential for a reply is the corporation's only asset. The company then declares bankruptcy, leaving you the sole owner of the message-asset, which you leave to your children, who then borrow against it. Having paid back the loan, they leave it to their children who then sell it on the appropriate exchange...

So you have a brief boom each time a laser sail full of messages goes out, then another, longer boom when the replies come in. If the people on the other planet want to make economic war against you, they can send false information and you'll have a bust.

Interesting.

28:
Once you hit a certain distance threshold, the latency differences between laser and mail become less significant- especially because once your round-trip latency exceeds the lifetime of the originating entity, it ceases to matter.

As is usually the case for these sorts of things, you've got to consider the entire mission. And in this case, you have to consider not only the amount of data being sent, but the type of data.

Specifically: at the low end, if your data is just the interstellar equivalent of 130-character tweets, or spam, or 240p pron, you can go with the analysis as is. But suppose you're transmitting some sort of intelligent agent that can operate autonomously once it's been downloaded into the proper substrate. In that case, I'd imagine half a message or even 90% of the whole message carries just as much content as 0% of the message; intelligence seems to be a rather delicate thing.

So how large a message do you need to encode such an agent? Going with Charlie's figures, 10^14 bits seems a bit on the low side. So assuming you need to deliver at least that amount for anything that's useful, memory diamond wins hands-down (I'd WAG anywhere from 10^15 to 10^18 bits for transmitting a viable human-level intelligence). And if the minimum size of the message is large enough this is true even if you weight latency higher than bandwidth.

29:

The first starwisp sent to a candidate civilization would have to slow down well outside the solar system, and on a course which made it obvious that no hostility was meant.

30:

This may be why we haven't been contacted yet. We've been broadcasting in various radio wavelengths for around 100 years. If any replies are coming from more than 10-15 light years away, they're still several light-months/years out and decelerating on a course that won't make us nervous.

31:

The Voyager probes have 3.7m antennas with a 22W transmitter, reaching 1.4kbit/s. Receiving antennas are 34m in diameter. They are 1/7000th of the 10ly yardstick away from us, which would drop bandwidth by a factor of 50 million - assuming a perfectly transparent medium.

If we assume that Arecibo is the largest practical size for a radio transmitter and receiver, then we can increase performance by a factor of a million and achieve a bandwidth similar to hand-tapped Morse code, using no more than Voyager's 22W transmitter. 1.4kbit/s would require the power of your typical microwave magnetron - 1kW.

Funny, you need about 1 Joule per bit, using an ordinary radio transmitter. Less, if you build a bigger antenna (in orbit?). How hard can that be?
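A rough numerical version of that scaling argument (Python); the Voyager figures and the factor-of-7000 distance ratio are the ones quoted above, the 305 m dish is Arecibo's, and free space is assumed:

    # Scale the Voyager downlink (3.7 m dish, 34 m ground station, 22 W, 1.4 kbit/s)
    # out to 10 light years, then swap both ends for Arecibo-sized (305 m) dishes.
    voyager_rate_bps = 1.4e3
    distance_ratio   = 7000                    # 10 ly vs Voyager's present range
    path_loss        = distance_ratio ** 2     # ~5e7, the "factor of 50 million"

    dish_gain = (305 / 3.7) ** 2 * (305 / 34) ** 2   # ~5.5e5, roughly "a factor of a million"

    rate_10ly = voyager_rate_bps * dish_gain / path_loss
    power_for_voyager_rate = 22 * voyager_rate_bps / rate_10ly

    print(round(rate_10ly, 1), "bit/s on Voyager's 22 W")     # ~16 bit/s: Morse-key territory
    print(round(power_for_voyager_rate), "W for 1.4 kbit/s")  # ~2 kW: magnetron-class power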

32:

Even assuming a fairly stately 1G of acceleration for the light-sail, it will only take approx 353 days to reach 10% of light speed. If (wild speculation) you can use the same laser for both cases, why not do both, i.e. Use the laser to accelerate the physical package for a year, then switch it over to direct laser messaging. This means that the laser will be running for closer to the full ten years instead of just one, but provides both high bandwidth/high latency (physical) and low bandwidth/low latency (laser) channels. The laser has to be built either way and, once the physical packet is up to speed, energy usage (message traffic) can be adjusted to meet efficiency requirements.

33:

The actual form of the data redundancy / error correction is somewhat interesting here.

Both methods of communication suffer from tremendous latency, of a sort that we don't generally deal with today. With your typical communications medium now, you use signalling design to get the error rate down to an acceptable level, and then you use error correcting codes and redundancy to achieve the desired data integrity on top of that.

Once in a while, of course, there'll be a glitch, and we either resort back to the good old send-it-again approach, or (for broadcasts like TV) we just have to deal with the information glitching out sometimes.

With interstellar comms at the 10 ly distance, sending the data again might take a minimum of 110 years (you send the NAK by laser -- 20 years if you ask for the data by laser too). But it's likely worse than that, since your data payloads are likely sent on schedule, and scheduling an extra launch may be expensive/impossible. Breaking the payload up into packets sent at the same time seems like it's probably impractical too, to some extent -- will you really operate multiple laser sail systems for this purpose?

A more useful mechanism is to encode redundancy into the transmissions, so that if one of the payloads is lost, you just have to wait until the next scheduled delivery rather than ask again.

For example, if you launched a data package once every year, then the simplest method would be to send twice as much memory crystal as needed: half to hold this year's message, and half to hold last year's. You can also, of course, prioritize based on importance of the data etc., and send more than twice.

This is not the most efficient mechanism, though. Instead of doubling the storage, you could pick a certain level of overhead and use a more advanced encoding. To see how this could work, let's consider a simple case:

Assume that losing more than one craft per decade is very unlikely. Every decade, we send 9 payloads of data, and one payload of error correction (if you've ever looked at ECC codes before, you'll notice this is exactly how RAID 5 works). Now, if any one of those payloads is lost, we just have to wait a maximum of 9 years until the recovery codes show up. That's more than twice as fast as sending a message by laser asking for the data again.

More complex schemes are easy as well. Of course, this is just considering the possibility of losing a whole starwisp. You'll also probably constantly lose data due to radiation -- which can also be detected/fixed using this sort of method.
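A toy version of that nine-plus-one scheme (Python); the payload contents and the choice of which starwisp gets lost are made up for the example, only the layout comes from the scheme described above:

    # Nine data payloads plus one XOR parity payload per decade: any single lost
    # starwisp can be rebuilt from the other nine plus the parity block.
    import os
    from functools import reduce

    def xor_all(blocks):
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    payloads = [os.urandom(16) for _ in range(9)]   # stand-ins for nine years of data
    parity = xor_all(payloads)

    lost = 3                                        # suppose starwisp number 3 never arrives
    survivors = [p for i, p in enumerate(payloads) if i != lost] + [parity]

    assert xor_all(survivors) == payloads[lost]     # the missing year is recovered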

But now think about all this long distance comm from a social angle. If you assume both the laser and a bulk comm system, people are going to have some inkling of what's coming almost a century in advance. For the next 90 years, there's going to be this huge data manifest sitting in an archive somewhere, and people will wonder what's really behind the abstracts.

When it starts getting to be near the date when your package arrives, people will probably even study up on the contents. Degrees will be issued. Experts will be waiting and ready!

Finally, the long awaited day arrives. The telescopes lock onto the inbound lightsail. The deceleration lasers fire up. And... disaster! The sail disintegrates, and all that memory crystal flies off into the void without slowing down.

Now said experts are going to be clamoring to get the data as fast as possible. Ok... ok... Nine years until the next correction data? What else are they supposed to do now?! And what if another ship is lost, oh the humanity! Then it would be 30 years until the data could arrive by laser at the very minimum! If we ask now though, it would only be 20 years...

So in short, it seems to me that if we assume that a lot of this data is rather important, use of these comm systems will always be very political. And then you can consider even longer distances...

34:

As others have noted, the interstellar medium is the problem.

The density of this medium is ~1 atom/cm^3. Your 10^-8 g of memory diamond is ~10^-7 cm^3. Being generous, that is an object with a frontal area of 10^-6 cm^2.

This implies that this object will hit one hydrogen atom for every 10km it travels.

At 0.1c (30,000 km/s), that would imply 3000 atoms hitting the diamond every second at cruising speed.

If my calcs are correct, the diamond will be destroyed before it has gone anywhere. So you are going to need to shield it, and the mass to do this will be how much?
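For the record, the hit-rate arithmetic above checks out; here it is as a short Python sketch using the same assumptions (1 atom/cm^3, a 10^-6 cm^2 frontal area, 0.1 c):

    # Interstellar-medium hits on the memory diamond at cruise.
    n_per_cm3  = 1.0        # hydrogen atoms per cubic centimetre
    area_cm2   = 1e-6       # frontal area of the payload
    v_cm_per_s = 3e9        # 0.1 c = 30,000 km/s

    hits_per_s = n_per_cm3 * area_cm2 * v_cm_per_s    # 3000 hits per second
    km_per_hit = 1 / (n_per_cm3 * area_cm2) / 1e5     # one hit every ~10 km

    print(hits_per_s, "hits/s;", km_per_hit, "km between hits")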

35:

That's the "yes but" problem. Braking mechanisms fail pretty regularly, as demonstrated by our efforts to explore Mars.

Let's not forget that we're sending message with military-grade energy levels. Even if we intend to do it peacefully, "oops" doesn't begin to cover the potential consequences of problems with this communications system.

If nothing else, do we want to be the target for a return message?

I'm only half-joking about this as a conversation via rifle bullet. I can presumably design and build a message bullet that will deploy wings, despin, slow itself down, and fly to my correspondent at a nonlethal speed. Unfortunately, it only takes one failure to really kill the conversation.

36:

It seems to me that you might be able to generate interest in a conversation with a 20 year cycle time, but not if the cycle is 200 years.

Also, the memory diamond package has to survive the radiation, relativistic dust, and micrometeorites over the course of the trip. It would only take a pebble (at 30,000,000 m/s impact speed, i.e. 0.1c) to interrupt the conversation, and we wouldn't even find out for 200 years. I'd vote laser, regardless of bandwidth.

37:

Actually, going with data compression as an angle, if we assume that both slow bulk data and fast expensive data is available... What is the economic value of getting old, rich data you already know about?

To give a simple example, with the laser comm system you might end up sending a really grungy sub-youtube video of interesting cultural artifacts. A hundred years later, the 3d HD version shows up, with color commentary.

Would anyone care?

The issue if you look at human civilization currently is that cultural information seems to be very in-the-moment. A movie appears, sometimes it has some lasting value, but the value rapidly goes to near zero as soon as people move on to the next thing. But you can almost always just pick a level of lossy compression where the functional information is retained, but the actual data usage is low.

As another example, a scientific paper arrives via high-speed, but the actual analysis data is bulk. Does anyone really care about century-old source data? If it was important, the paper should have had enough information to run your own experiment.

If you're looking at an interstellar civilization, the data you're getting is always going to be significantly delayed (and any time you talk to someone on the other end, the delay is twice as much). However, since everything is delayed, you won't really see it as delayed when you look at it: it's hot off the telescope.

So perhaps people will look at things in these terms -- there's a vibrant local culture that they're on top of, and then there's an "other" culture that arrives over the laser. You keep a mental image of current events going on "over there" and you put things that show up via the high-speed in that perspective. A movie makes sense because you're receiving the news it was made with at the same time.

What then is the point of the low speed bulk data? It doesn't arrive "now" from a cultural point of view. You already saw that movie -- just in terrible quality. Or you saw a lot of other movies from that era, but this movie wasn't as popular so it got shunted to the bulk transport. Now it's ancient history. Sure, maybe you'd like a copy for the library -- but nobody is really all that interested.

What sort of data is it where the time/space tradeoff justifies such a long relative delay? Note that it's always relative here, as long as the high-speed link is reasonably available. You get the really interesting stuff at the speed of light. A thousand light years out, you'll be receiving the laser-based information "as it happens" compared to the bulk data, even though in terms of actual two-way communication either method is absurd.

Maybe the real point of bulk data is colonists: like scentofviolets said, a colonist is a huge chunk of data. When the colonist gets activated in some way, she will be in a very alien environment regardless, with little chance of talking to home. If you're going to give transmitting your mental state to another star a shot, it's basically a wild gamble. Does it really matter so much which century you wake up in?

Meanwhile, an official emissary might be much more time-sensitive. If the data requirements can be met at all, sending such a person over the high-speed link would make much more sense.

38:
It seems to me that you might be able to generate interest in a conversation with a 20 year cycle time, but not if the cycle is 200 years.

You're assuming that we're dealing with modern humans, where 20 years is a long time, but you could get a few emails in before you die. With an organization, perhaps such a conversation could be maintained over many generations.

This would most likely not be the case for a real interstellar civilization though. Individuals might live for thousands or millions of years, so a conversation with 200 year latency would be totally doable.

Even more interesting, if you're able to shut your mind down (eg. if you're living in a computer), or have multiple mental tracks running at different speeds, then you might not experience any delay in such a conversation. You'd just see yourself taking less part in local civilization while it occurs.

39:

What you need is a laser using this tech: orbital angular momentum (OAM), a twisting pattern in the beam 270 megabits per second between the two planets

Versus what we use now:

pulse-position modulation (PPM) up to 64 symbols per helical shape

http://news.sciencemag.org/sciencenow/2011/07/a-high-bandwidth-interplanetary-.html?ref=hp

The downside appears to be the size of the receiver, though he disputes the need for kilometre size, stating polarization is the key.

I was trying to figure out a way to polarize the sun side facing your target (years off in time, so lead your target accordingly). Say sheets of electrically controlled polarizing material at geo-sync (gotta only communicate with those co-planar to the sun's rotation!), change the polarization to communicate... would have to figure out the bandwidth however.

40:

Truckloads of tapes would really help out when you're the root of a billion-node SRP tree.

41:

Well, that didn't work out so well... PPM --> 270 Mbit/s versus OAM --> 64 symbols... several sites list it as 100x, so in the tens of Gbit/s range.

42:

And if we find (next year) that neutrinos really are FTL, everyone's calculations and Fermi Paradox assumptions are utterly fucked. I'll wait before speculating on bandwidths etc

43:

I remember, but can't spell a SF book on this. The O-something HOTLINE had data going by that could be fished out and used. But you had to add to it.

44:
What you need is a laser using this tech: orbital angular momentum (OAM), a twisting pattern in the beam 270 megabits per second between the two planets

You mean like how the bit-rates were juiced up from 2400 baud to 9600 baud on the old modem standards, starting with (iirc) the V.22 on up to the V.32? Boy, that takes me back to my days in IT! Fooling around with the various modem handshakes and encodings used to be a real headache back in the '90s.

Anyway, you've got to be careful here. On the one hand, classically you can talk about the phase of the polarization (again iirc, there are really only two kinds - left circular and right circular, with everything else being a sum) and if your detectors are sensitive enough you can encode anywhere from two bits to maybe six or seven bits per period as opposed to just one bit. But to get anything like classical behaviour, you're talking about the contributions of several dozen photons taken together as one piece.

Otoh, what you're really talking about when discussing phase modulation is just what proportion of the photons being detected are spin up as opposed to spin down. So if you're detecting only, say, three photons at a pop, you're not going to be able to pull six bits of info out of the ensemble - in fact, the best you can do in that particular case is maybe . . . three bits.

45:

Was it Varley's Ophiuchi Hotline?

46:

A couple of years ago there was a contest between broadband internet and a pigeon, and the pigeon won, of course:

http://www.physorg.com/news171883994.html

It was only 60 miles, but still.

47:

Which is why you want to always send the first message via the laser. The inverse square law means that it's a lot less dangerous to the guy on the other side.

48:

You are quite correct. (As long as the first message doesn't need to be really, really big for some reason.)

49:

Sending unsolicited messages by 0.10 c missile sounds like a hideously dangerous thing to do.

Any sufficiently advanced civilization will detect the propulsion laser easily enough, but they will have no way to know what it's propelling. The payload could be a memory archive, but from an unknowing recipient's perspective, there's nothing to say the payload isn't a berserker, self-replicating grey goo probe, or a planet-busting KKV. There's also no way to find out what the payload is until it's too late to do anything about it.

Launch a memory archive at a more advanced civilization and they might return fire with something much less benevolent.

It's much safer to use a laser to approach unknown civilizations. Photons are less likely to be misinterpreted.

50:

Following up:

3000 atom hits/s ~ 10^13 hits over the 100 year, 10 light year trip.

If a hit packs enough energy to destroy the diamond message payload, you would need 10^13 payloads, or mass equivalent to handle the damage.

The initial tape vs laser calculation suggested an approximately 10^12 energy efficiency improvement using tapes, but this appears to be completely canceled by the interstellar medium. This seems to suggest that you need a deflector that masses a lot less than 10^13 times the payload, or in this case, much less than 100 kg.
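The follow-up numbers, spelled out (Python). The "every hit writes off a payload's worth of diamond" worst case is the assumption made above, not a measured damage threshold:

    # Total ISM hits over the trip and the worst-case mass budget they imply.
    hits_per_s       = 3000
    trip_years       = 100              # 10 ly at 0.1 c
    seconds_per_year = 3.156e7

    total_hits = hits_per_s * trip_years * seconds_per_year    # ~1e13

    payload_grams    = 1e-8
    worst_case_grams = total_hits * payload_grams               # ~1e5 g

    print(f"{total_hits:.1e} hits over the trip")
    print(worst_case_grams / 1000, "kg of diamond consumed in the worst case")   # ~100 kg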

51:

Yes, I'm sure he meant The Ophiuchi Hotline, although Macroscope by Piers Anthony used similar themes. Thanks for the link.

52:

"If a hit packs enough energy to destroy the diamond message payload, you would need 10^13 payloads, or mass equivalent to handle the damage."

That's a big 'if' and I'm not sure the conclusion follows from even that. You need a big/tough enough payload to not be destroyed by the hits; it may be that you can deflect or reflect most hits.

53:

Huh? Why a separate payload at all? Encode bits directly into the sail; with hundreds of kgs to play with it's within reach of technology available now, and plenty of redundancy for relativistic encounters with interstellar dust motes.

54:

With all that energy for communications devices, why not just wobble a planet on its axis and encode bits into the vibrations?

The aliens could easily telescope that; and it'd be much more fun than flinging bits into space.

55:

Greg, any storage medium you send is going to get screwed by high energy cosmic rays. Memory diamond at least uses multiple covalent bonds to encode each bit.

56:

So why would aliens want to contact the most primitive race in the galaxy?

57:

Nope.

The correct protocol is:

  • Set up communication protocol via radio or laser.

  • Wait for confirmation that the other end has produced a starwisp acceleration/deceleration laser.

  • Announce the vector you're launching on in advance. Target should be several AU away from any valuable real estate.

  • Let the recipients decelerate the light sail at the other end.

  • If they fail to provide deceleration, it goes barrelling through their star system at a distance from their habitat equivalent to the Earth-Jupiter gap. Eventually (circa 10-100 times the flight distance) the payload ablates due to friction with the interstellar medium. Or, once decelerated from relativistic speeds, it becomes a relatively controllable light sail.

The point is, you don't send a physical data package without prior warning, and you ensure that its course is fail-safe -- that is, it will decay before it hits anything valuable.

58:

You're right, I totally borked the light sail. What I've got for energy input would work fine if we could magically convert energy into momentum with no reaction. (Where's a Larry Niven prop when you need one?)

Does anyone have a good source for the energy issues surrounding laser sails?

59:

Dual-purpose laser? An excellent idea, assuming the design constraints are compatible.

Let's see: if we have 1GW of output (for our signal laser), that's enough to deliver 0.3 newtons of force (divide by c). If it's pushing a 10^-8 gram payload and can focus tightly enough that we can use a 1 gram sail (a thin film ten metres in diameter, perhaps?), then we can accelerate it at 300 m/s^2, or around 30 gees. That should be enough to reach 10% of light speed in a little over a month.

Obviously, the question of how to build a 1 gram reflective surface that doesn't turn into plasma the instant you drop a gigawatt on it is a major constraint. (We might do better to use a charged coil for a sail, and shoot charged relativistic particles through it to provide the momentum transfer. Then we're into an entirely different tech.)

The technical problem of focussing a big-ass laser on a 10 metre disk out to BECO -- average speed 5% of c for a month, I make it roughly a 20th of a light year, or 300AU -- is formidable, but if the starwisp is maneuvering to stay in the middle of the beam it's not totally implausible.

60:

Actually, the energy levels in play for a 10% of lightspeed starwisp with total mass (including sail) of 1 gram aren't that serious.

1 tonne of TNT decomposes to release just under 4.2 gigajoules, while the kinetic energy of a one gram starwisp traveling at 10% of c is around 900 gigajoules. (Going by Newton, not Einstein -- relativistic effects don't become significant until we're going a lot faster.) So we're talking about 0.25 kilotons.

To put it in perspective, that's a fiftieth of the yield of an early A-bomb. It's about a quarter of the gravitational potential energy released by the collapse of the World Trade Center in New York on 9/11.

While you wouldn't want it to hit you, it's not really weapons-grade.

(For true relativistic kinetic weapons, there's a handy figure I cribbed from elsewhere: if it's travelling fast enough for the Lorentz contraction to be 0.5 -- around 87% of c -- then its kinetic energy equals half its rest mass. And a mass of one kilogram converted into energy yields 22 megatons. So for a true interstellar weapon, we're looking for something like a one ton rod of tungsten or uranium, incoming at over 85% of c. Which would dump around 10 gigatons into whatever it hits, which is enough to ruin anyone's day. But it's a far cry from what we're discussing here.)

61:

Actually, going with data compression as an angle, if we assume that both slow bulk data and fast expensive data is available... What is the economic value of getting old, rich data you already know about?

Well, let's see.

Distance (in time and space) obviates privacy concerns. I really don't want my neighbours to know my credit card number and PIN code, but it's of no concern to me if a social scientist 10 light years away gets to read it a century hence. So the deep history issues I was talking about in that speech come into play. We can exchange really sensitive personal information for data mining and statistical analysis and research without fear of it leaking locally.

Again: astronomical observations. Being able to exchange huge bundles of raw telescope data with a neighbour 10 light years away would be absolutely priceless to astrophysicists and astronomers. (For the first time, we'd have useful parallax across more than an arc-second!)

A much more speculative issue: mind uploading. If it's possible, for us or for future AIs, then actual personal interstellar travel becomes a matter of some interest. (Upload yourself, then spend a century as a file stashed aboard a starwisp, then download into a new body at the other end.) Obviously, sending just the data describing an astronaut rather than the whole bundle of spare parts will result in a considerable reduction in launch costs. Yes, there are huge (to the point of being nearly unfathomable) side-avenues to explore here, both in terms of practicality and motivation, but it's worth noting.

62:

Hi. I'm still up, unfortunately; I was grading finals and against my better judgment - backed by considerable experience - had a (big) cup of coffee around 8:00 pm my time. God, let me count the ways I hate getting old!

Anyway, I'm not sure what you're asking for here; to a good first order, to get the acceleration of a given sail, just divide (twice) the incident energy per unit time by c, and then by the sail's mass. So, for example, if the incident light is 1,000 W/m^2, you've got a momentum change of 2 * 1,000 / 3*10^8, or about 10^-5 Newtons per square metre. So if your sail masses one gram per square meter, that's an acceleration of 10^-5/10^-3, or about 0.01 m/sec^2.

The nice thing about this calculation is that it's completely linear; if you want 100 times the acceleration in this particular example, you need 100 times the incident light, about 100 kW/m^2.

Turning it around, since a kilowatt-second beam will impart a velocity of 0.01 m/sec and you want to get to 10% c, 3*10^7 m/sec, you'll need about 3*10^9 kW-sec to do so, or in more convenient units, roughly 10^6 kW-hours per gram of material.[1] Not a very good use of power (I think we discussed this tradeoff in an earlier thread.)

Is that the sort of thing you were talking about?

[1] Or you could have just multiplied your original figures by c :-) But the lengthier analysis gives you a sense of why this is so.
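Here's that calculation as a small Python sketch, assuming a perfectly reflective sail (momentum transfer 2E/c) and the same example figures of 1,000 W/m^2 on a sail massing one gram per square metre:

    # Light-sail acceleration and the beam energy needed per gram, following the
    # figures above. Reflective sail assumed: force = 2 * intensity / c per unit area.
    c = 3e8
    intensity_w_m2   = 1000.0
    areal_density_kg = 1e-3                        # one gram per square metre

    force_per_m2 = 2 * intensity_w_m2 / c          # ~6.7e-6 N, "about 10^-5 Newtons"
    accel = force_per_m2 / areal_density_kg        # ~0.007 m/s^2, "about 0.01 m/s^2"

    v_target = 3e7                                 # 10% of c
    beam_joules_per_gram = v_target * c / 2 / 1000 # E = m*v*c/2, evaluated per gram
    kwh_per_gram = beam_joules_per_gram / 3.6e6

    print(round(accel, 4), "m/s^2")
    print(round(kwh_per_gram / 1e6, 2), "million kWh of beam energy per gram")  # ~10^6 kWh/gram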

63:

Also:

Firstly, you don't want to put your laser emitter so close to your star that its output signal is swamped by random noise.

And secondly, you want to use every trick in the book to focus it.

The sun acts as a gravitational lens, with a focal point around 550 AU out (13.86 times as far out as Pluto); beyond 550 AU there's effectively infinite focal depth.

If we're running a laser communication channel between two stars, it therefore makes sense to stick the receiver in line with the sun and the target, so that it can use the solar gravitational lens as a focal device.

As for the transmitter ...

We're discussing a 10 light year reference problem; that's about three parsecs (3.07, to be precise). One AU gives us approximately a third of an arc-second of parallax at that distance. If we place our laser transmitter 180 astronomical units out, we can separate our laser from the sun by about an arc-minute, making reception much easier.

This means any message we send across interstellar distances has to go via two intermediate stations, both light-days out from where we live (on the edge of the solar system as we understand it -- probably between the Kuiper belt and the Oort cloud). But in return we get to use a gargantuan lens to focus the incoming beam on our receiver, and ensure that our outgoing beam is visibly separated from the sun, making it easier for the folks at the other end to get a lock on it.
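The geometry, in numbers (a Python sketch; it just uses the small-angle rule that one AU of baseline subtends one arc-second at one parsec, and takes 10 light years as about 3.07 parsecs):

    # Parallax/separation geometry for the transmitter and the 550 AU gravitational focus.
    distance_pc = 10 / 3.26                      # 10 light years in parsecs, ~3.07

    parallax_per_au  = 1 / distance_pc           # ~0.33 arcsec: "a third of an arc-second"
    separation_180au = 180 / distance_pc         # ~59 arcsec: about an arc-minute off the sun

    au_m = 1.496e11
    light_day_m = 3e8 * 86400
    focus_light_days = 550 * au_m / light_day_m  # ~3.2 light-days to the solar gravitational focus

    print(round(parallax_per_au, 2), "arcsec per AU of baseline")
    print(round(separation_180au), "arcsec at 180 AU")
    print(round(focus_light_days, 1), "light-days out to 550 AU")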

64:

You are thinking of "The Ophiuchi Hotline" by John Varley. Highly recommended, but written in the early 70s and a little dated on the science side (not to mention requiring positively Larry Niven/Known Space levels of unobtanium to make it work).

65:

I think you underestimate the mass of your average planet. By, oh, ten or more orders of magnitude ...

66:

Actually, this:

and this:

We might do better to use a charged coil for a sail, and shoot charged relativistic particles through it to provide the momentum transfer. Then we're into an entirely different tech.

Are available more or less right now for what you're talking about.

See, a very simple (and not corrected for relativity) analysis shows that velocity and kinetic energy are split between the propellant and the payload as a ratio of their relative masses. That is, if you throw something out back behind you that weighs ten times as much as you do, you'll end up moving ten times faster than what you threw out. Better: you get the lion's share of the kinetic energy, over 90%.

You can play this game with ratios of a thousand to one or a hundred thousand to one in the same way: pushing 100 kg backward at one m/sec will push one gram forward at 100,000 m/sec. And that one gram will capture 99.99% of the available kinetic energy.

How to do this sort of thing in practice? Well, since your payload is so small, you just use a honking big linear accelerator to shove it around. And since you're in space, size isn't an issue . . .
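The split is just momentum conservation; here it is with the comment's own 100 kg / one gram example (Python, non-relativistic):

    # Propellant/payload momentum split for a mass-driver style launch.
    m_propellant = 100.0      # kg, thrown backwards
    v_propellant = 1.0        # m/s
    m_payload    = 1e-3       # kg -- one gram

    v_payload = m_propellant * v_propellant / m_payload      # 100,000 m/s, as stated

    ke_payload    = 0.5 * m_payload * v_payload ** 2
    ke_propellant = 0.5 * m_propellant * v_propellant ** 2
    payload_share = ke_payload / (ke_payload + ke_propellant)

    print(v_payload, "m/s for the payload")
    print(round(payload_share * 100, 3), "% of the kinetic energy goes to the payload")  # ~99.999%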

67:

Whoops! Even though I can't get to sleep, I'm obviously up way past my bed-time; my first quote didn't get quoted, but it was this:

What I've got for energy input would work fine if we could magically convert energy into momentum with no reaction. (Where's a Larry Niven prop when you need one?)

My point was that in certain special circumstances, we have this sort of magic already. Now those circumstances will probably never allow for a multi-ton space ship (in which case you will need some sort of Niven prop.) But for probes that mass on the order of a gram or less they definitely do.

68:

This tells me that in the future [cryogenic freezing|temporal stasis|uploaded mind with pause button] will be an essential part of any respectable online gaming rig.

69:

What about multi-frequency lasers? There are a lot of frequencies. The momentum transferred to an object reflecting a photon back in the direction it came from is twice the momentum of the photon. The momentum of a photon is its energy divided by the speed of light. The force exerted by your laser is twice its power rating divided by c (not Newtonian). Accelerating 10^-8 kilogrammes to c/10 requires 0.3 newton-seconds, for around 4.5 * 10^7 joules, or 9 * 10^7 for a journey with acceleration and deceleration. In fact, that gives us a general formula for light sails: 1/2 m.c.v, rather than 1/2 c.v^2.

70:

That's 1/2 m.v^2 - I'm posting from a smartphone.

71:

Why do you keep on talking about laser communication for long range purposes? Not only do we not have a laser that can be run continuously at a power rating of 1GW, we also don't know how to build a laser that is physically large enough to prevent diffraction from taking its toll over the huge distances involved.

Even if we assume that we could build a phased array of lasers, the efficiency of a laser is so bad that you will need several major power stations to power it. And it would of course have to be in orbit to avoid atmospheric distortions.

Radio is just much more efficient and easier to implement, and that includes phased arrays spanning the whole planet (well, we do get problems with the Pacific, but there are enough islands even there). And physics makes sure that this is the case everywhere, including our interstellar neighbourhood.

Sorry, but I think you guys fell in love with a very unreasonable idea. (It's a fairly good idea to use lasers where the distances involved are on the order of 0.001% of what we're talking about - namely within our solar system - because of the much lower powers and much smaller lasers required.)

72:

CS: I think you underestimate the mass of your average planet.

Maybe. But we could use a smaller-than-average planet.

And remember, we'd be broadcasting to anyone who can see our wobbles, so that's maybe 1000 civilisations at once, and no need to waste decades negotiating the right to start transmitting. Your laser starwisp needs separate power supplies for each of those narrowcast communications.

I think I am on to something -- even if you may think I am on something.

73:

The NIF isn't a continuous laser, but its pulsed power output is around 0.5 terawatts, and while that's a compressed pulse, the capacitor bank that energises it is able to store around 500MJ (mega, not milli).

More to the point, the 1GW laser is a reference point. I'm fairly sure I've seen proposals for interstellar comms lasers that are an order of magnitude -- or even two -- lower-powered, albeit with lower bandwidth; that's just the one for which I was first able to find some pre-worked calculations.

The problem with radio is the inverse cube law, unless you go for a maser instead -- in which case you've got a coherent emitter (like a laser) albeit with lower peak bandwidth (because of the wavelength it operates at).

74:

Two large-area thin parallel mirrors, each weighing 1g and with perfect reflectivity, spaced 1cm apart initially. Starting with 1kJ of light introduced into the cavity, and assuming no side losses, what is the acceleration of the mirrors, and their final velocity?

75:

Erm, it's not a cube law, it's a square law. The inverse cube law is only valid for the near-field dipole field, not electromagnetic waves.

And this means simply that if you want to communicate over 10 times the distance, you either need to increase the area of your transmitter dish by a factor of 100, or that of the receiver dish, or the power of your transmitter, or any combination thereof.

If you can change both sides and only consider the diameter of a single dish, then things turn into a square root law. If you want to communicate over 100 times the distance, you need 10 times the diameter on both sides - without changing the power of the transmitter.

76:

To be polite:

32: Even assuming a fairly stately 1G of acceleration for the light-sail, it will only take approx 353 days to reach 10% of light speed.

353 very short days; 30,000,000 m/s by 10 m/s/s works out to 3,000,000 seconds, or about 35 of our Earth days.

34: As others have noted, the interstellar medium is the problem.

The density of this medium is ~1 atom/cm^3

We're in a region of million-degree, ionized, low-density hydrogen gas.

Enjoy!

http://www.solstation.com/x-objects/chimney.htm

77:

Also, if you use a Dyson-Nicoll Laser, not only will bandwidth be pleasantly high, you could just write your message in the surface of the target world.

78:

If the diamond does not include additional mass that can read/write the data, then in addition to the travel energy, you should consider:

a) The energy to write to the diamond. If it's your civilization on both ends, or you are not willing to categorically ignore the other person's pain, the energy to read from the diamond is also important.

b) The energy to manufacture the diamond.

c) The energy to get the diamond (or the information) into space so that the laser sail is even possible.

d) Error correction, which you (correctly!) applied to the laser, but failed to apply to the diamond. In fact, the diamond suffers from a bad case of eggs in one basket - with the laser, packet loss is statistical, while with the diamond packet loss is catastrophic.

On that last point:

Laser: Assuming that we are using naive data redundancy, and that this requires 1,000 bits per actual bit, there is still a chance that a block of data (arbitrarily, one second's worth) will be gibberish when it arrives, and that the data cannot be recovered. An ARQ (automatic repeat request) will be sent out, and that block of data will end up taking 30 years instead of 10 years. If that block is essential to the entire message, the whole message will be held up, of course, but otherwise, only that block will suffer the hold-up.

Diamond: Assuming we want the same level of security, one thousand diamonds are sent out, and there is still a chance that no diamond will make it to the end. The resulting ARQ will result in a 300-year latency instead of 100, and the latency will always be for the entire message.

    79:

    which is the most energetically efficient way to transfer data

    My humble opinion: if a civilization still asks about efficiency, it's too early for space colonization. "Too cheap to meter" first, Galactic Empire second.

    80:

    PS. I mean interstellar space colonization, of course.

    81:

    We're in a region of million-degree, ionized, low-density hydrogen gas.

    Fascinating! Great reference.

    82:

    Modulate the launch laser with the instructions for decoding the later compressed instructions for building the deceleration laser and a return starwisp.

    83:

    "Maths! Why did it have to be maths?"

    This discussion reminds me of The Genesis Quest by Donald Moffitt, wherein humanity harnesses the power of a sun to transmit a radio stream to other galaxies.

    Wikipedia: "The Genesis Quest gets around the problems involved with intergalactic travel, namely the distance, by avoiding the traditional staple of science fiction, faster than light travel. Instead Moffitt opts for a different tactic, that of having an alien race (The Nar) assemble humans from a stream of genetic information transmitted by radio from the Milky Way Galaxy. The resulting colony of humans spend some time integrated into the Nar society before growing randy, discovering the secret of human longevity, and embarking on the seemingly impossible millennia-long mission of a physical journey back to earth. This epic journey is made in a gigantic space-grown semi-sentient Dyson tree known as Yggdrasil."

    84:

    Well, alright, and maybe I'm being a wee bit cynical here, but what happens if the data that you send to benign Aliens - who are Cute and Lovable members (of Course they are, who could doubt it?) of Alien Civilisations - is Hacked on the Way to the Benign Aliens, henceforth to be called 'BAs', as in BA BA Black Sheep and all that sort of thing?

    Forget about the immediately short time focused Human Interests in making Money/Value and consider a Clan? /Family that is interested in investing in the future of their ... 'Polity' ? Religion/Nation/Notional interests, and then there is the future equivalents of Moro Pirates ..translate this following into the Futures' Equivalents ...

    http://en.wikipedia.org/wiki/Moro_Pirates

    There can surely be no doubt that however this Empire of Info and Galactic Exploration does work out there just has to be a way of extracting Money/Value from this project by Pirates " Arrrrr, Jim ' .. /Charlie .. Lad "

    Um-Less you really believe in Universally Happy Shiny People in the Wonderland that is To-morrow?

    And of course we can forget about the equivalents of Trolls and Mischief makers hacking the Interstellar Data Stream can't we? See HAPPY SHINY PEOPLE of the future. Why this Sabotage? Why not?

    85:

    Fascinating.

    So how does one set up communications in the first place? How would we know to shine a big laser on some artifact coming to our solar system at 10% the speed of light? (Even assuming we'd detect it, which is highly unlikely). Would we worry that shining the laser (to slow it down) would be interpreted as an attack?

    It would be very sad to have a bunch of light-sails with messages flying by at an appreciable fraction of the speed of light. We're not expecting to see these messages. If one of those hit an asteroid or planet, it would make a big boom, given all that kinetic energy. Much, much, much more likely, the messages would just sail by without anyone noticing.

    It seems that this approach is something to consider after first contact is made, and somehow, folks agree upon a communications protocol. Or, it's something that would work better if a civilization managed to spread across multiple star systems so they share a common expectation to get messages this way.

    On the question of detecting space-ships / probes / messages moving at 10% of the speed of light, is it possible to detect these things? I mean if there's something moving that FAST through the interstellar medium, wouldn't it heat up lots of hydrogen atoms in the interstellar medium? Shouldn't that give off some sort of x-ray / UV / infrared or other radiation that we could detect?

    86:
    Which reminds me of the unfortunate acronym for the Moro Islamic Liberation Front...

    Trolls In Space? I expect they are Goonswarm. I hope our neighbourhood isn't a 0.0 warfare zone.

    87:

    Oh, and as a follow on from my post of 84.

    Fermi Paradox and Where are those Aliens? They were/are there but their Data Stream or Teeny/Gigantic Interstellar Craft of whatever stamp have been intercepted and destroyed, either by deliberation or accident, by the Alien Equivalents of Born Again True Believers who have their Races best interests at Heart ... and who are in the mean - Geological/Interstellar - time interested in profiting from the Equivalents of the Spanish/Nigerian Prisoner Scam.

    88:

    Dear Sir or Madam,

    Our charitable organization helps newly discovered planets, and the Fzzgrz Development Bank of Rigel IV is holding a 24 million Quatloo sum deliverable to your government after the reception of the correct routing and protocol information...

    89:

    You can't assume that humans will be doing maintenance at the time period when interstellar communications are being exchanged. Already a lot of that has been automated.

    90:

    Here's an idea for a more modern replacement for that codebook concept: Define a standard VM: the Java VM, Python, Parrot, Smalltalk, Intel, even MIX or UCSD (Pascal). It almost doesn't matter. Now you send programs that, when fed into that machine, generate the messages. (I'd prefer something that could handle Unicode easily, but that's me, with my local preferences. It makes MY translation programs simpler. Anything that will handle bytes, or some reasonable analog, would suffice.)

    Note that because it's a VM, it knows nothing about the implementing platform, so it can't pass worms. And because it's a VM, it can be implemented on ANY sufficiently powerful computer. So the implementing platform can change freely under it, as long as the VM is kept updated. All that needs to be exchanged is compressed binary programs. Of course this can include simple numeric or textual data, but the VM handles the conversion of this from one form of representation to another.
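
    (A minimal sketch of the sort of thing being described: a toy, platform-neutral stack machine whose programs generate messages. The opcodes and encoding here are invented purely for illustration; no real VM spec is implied.)

        def run(program):
            """Interpret a tiny, deliberately platform-neutral stack machine."""
            stack, out, pc = [], [], 0
            while pc < len(program):
                op, arg = program[pc]
                if op == "PUSH":
                    stack.append(arg)
                elif op == "ADD":
                    stack.append(stack.pop() + stack.pop())
                elif op == "EMIT":
                    out.append(stack.pop() & 0xFF)     # emit one byte of the message
                elif op == "JGZ" and stack[-1] > 0:    # loop while the counter is positive
                    pc = arg
                    continue
                pc += 1
            return bytes(out)

        # A little looping program whose output is the message "Hi! Hi! Hi! ":
        greeting = [("PUSH", 3)]                       # repeat counter
        for ch in "Hi! ":
            greeting += [("PUSH", ord(ch)), ("EMIT", None)]
        greeting += [("PUSH", -1), ("ADD", None), ("JGZ", 1)]
        print(run(greeting))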

    But do note that this doesn't differentiate between sending via laser or storage medium. In both cases you need redundancy to repair damaged transmissions. And I don't think an error correcting code would suffice in either case. I think in addition you are going to require multiple independent transmissions, separated by, perhaps, a month or so.

    91:
    Two large-area thin parallel mirrors, each weighing 1g and with perfect reflectivity, spaced 1cm apart initially. Starting with 1kJ of light introduced into the cavity, and assuming no side losses, what is the acceleration of the mirrors, and their final velocity?

    That's easy, one shoots off to the left at 1,000 m/sec, the other to the right at 1,000 m/sec . . . or rather this is the limit that is approached asymptotically.

    Ain't ideal materials working perfectly within classical limits fun?
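
    (Quick sanity check on that limit, assuming the whole 1kJ ends up split evenly as kinetic energy of the two 1g mirrors:)

        E_total = 1.0e3                          # J of light in the cavity
        m = 1.0e-3                               # kg per mirror
        v_final = (2 * (E_total / 2) / m) ** 0.5
        print(v_final)                           # 1000.0 m/s per mirror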

    Side note: this still happens once or twice a year for the obvious reasons but - Must. Not. Drink. Coffee. After. Five.

    92:
    So how does one set up communications in the first place? How would we know to shine a big laser on some artifact coming to our solar system at 10% the speed of light? (Even assuming we'd detect it, which is highly unlikely).

    The classic answer - the one used by The Ophiuci Hotline in fact - is to have your zingy tasty message laser aimed somewhere out in the Oort cloud rather than directly at the inner system. So it's a filter as well (Yet Another Old Trope, a la Clarke's Monolith); you can't detect the traffic that's flying by all around you until you're advanced enough to not be whining to the Message Bearers for Advanced Tech.

    93:

    That's not the tricky or surprising part - the acceleration is.

    94:

    Can you focus microwaves sharply over that distance? Longer wavelengths have a greater tendency to spread. I think laser sail is the correct approach. (This is just off the top of my head and could easily be wrong, I admit. Particularly if the interstellar dust doesn't diffuse microwaves and does diffuse laser light.)

    OTOH, locally we're in a void, so this may not be significant. Still, the more sharply you can focus the beam, the less power you waste (assuming equivalent rates of interaction).

    95:

    This has impact on the Fermi paradox because if everyone is exchanging physical messages most of the time, there won't be any radio signals to overhear.

    It looks like quite a plausible answer if we assume that civilizations don't go in for sending out replicating probes.

    96:

    <whimsy> And there's the short answer to the Fermi Paradox:   Any sufficiently sophisticated space-faring species has established firewall technologies in place, because they prefer not to get spammed.

    Without the secret handshake, we are unlikely to ever notice their traffic. Perhaps SETI should place an order for a dozen gross crates of chickens and resend?

    </whimsy>

    97:

    Even supposing ETI are sending physical packets, does this really solve the FP if the packets are sent by beamed sails?

    Beam spread will result in some detectable signal, even if it is fairly transient, like the Wow! signal. Wouldn't we expect to see such a signal on occasion from various directions in the sky?

    If we used the sun as a gravity lens, a number of receiving antennas in different orbits might well detect a beam even passing by the solar system, so I doubt that their presence would be invisible. No signal would suggest either no physical messaging or a very different method of acceleration.

    98:

    For whoever may care: The idea that the best compression for some data is the shortest program generating that data was apparently developed by Kolmogorov, though according to Wikipedia, Chaitin and Solomonoff also played with the idea. Have a look at

    http://en.wikipedia.org/wiki/Kolmogorov_complexity

    and

    http://en.wikipedia.org/wiki/Invariance_theorem

    I won't claim to understand the details, but the general idea seems straightforward enough.
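
    (A toy illustration of the idea -- not real Kolmogorov complexity, which is uncomputable, just the flavour of "the shortest program that regenerates the data":)

        # Two million characters of highly redundant "message":
        data = "ab" * 1_000_000

        # A tiny "program" (here, just a Python expression) that regenerates it exactly:
        program = '"ab" * 1_000_000'
        assert eval(program) == data

        print(len(data), len(program))   # 2000000 characters of data vs a 16-character program
        # Genuinely random data, by contrast, admits no generator much shorter than itself.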

    99:
    Forget about the immediately short time focused Human Interests in making Money/Value and consider a Clan? /Family that is interested in investing in the future of their ... 'Polity' ? Religion/Nation/Notional interests, and then there is the future equivalents of Moro Pirates ..translate this following into the Futures' Equivalents ...

    This is actually my second most preferred WAG for the distinct lack of other-worldly intelligence observations: They're out there all right, but until we actually have something of value they want, they are distinctly uninterested in going to the rather capital-intensive effort of announcing themselves.[1]

    But what about all those pure science research types who want to catch us in our native state before we're despoiled by Advanced Tech, or so some romantics will wail. Well, the truth is, pure academic researchers don't get squat from the Gross Interstellar Product . . . same as here.

    It's the high tech civilizations that are truly rare, you see. And it's only they that are truly interesting in their local adaptations of the Universal Tech. What's the best way to power an advanced society, for example, is currently being hotly debated on our Earth. Turns out in retrospect that of the four or five basic options available to any species, the answer is "all of them", with what is ultimately adopted saying more about the tool users themselves than about basic physics and engineering principles.

    And in fact, it turns out that life is rather common - you see single-celled life once every ten light-years, complex multi-celled life every fifty light-years, intelligence (with the stipulation that part of the definition of intelligence is "tool using") every 250 light-years, . . .and advanced tool-using intelligence that's survived the Great Filter (namely that brief window of opportunity every intelligent species has to develop a sustainable high-tech, high-energy society before exhausting the planetary store of non-renewable resources) every 25,000 light-years.

    So while the initial enthusiasm of your typical high-tech species will fund the missions to find extra-terrestrial life, revenue tends to dry up after the public gets hip to the fact that your typical tool-users are all about mud pies and slavery and their most interesting technological items are the cunning local variations on skull-cracking war clubs. This is of course another ancient and honorable trope in sf, which I first encountered in Hoyle's Into Deepest Space and where it turns out we didn't make the cut.

    [1] I first read this one in, of all places, a Winston Juvenile, "Planet of Light" by Raymond F. Jones. Turns out that high civilization was reluctant to make contact with the myriads of not-so-advanced planets (of which we were one) on account of the fact that the first thing these mooks did was to start yammering for fusion power, antigravity, nanotech, the secret of immortality, etc. Which was also expected to be provided for free, of course, and whose first use (in the rare cases where it was provided) was most frequently to subjugate and oppress the surrounding tribes and countries.

    Hey, it's a Peter Watts world out there.

    100:

    Not necessarily, adding more channels does not require you to add more power, it simply lets you say more things with the power you have. A multi-channel communicator can say more things with the same effort.

    If you had to communicate a message to me by sending me colored tokens, the amount of info you could convey to me with some fixed number of tokens is dependent on how many colors you have access to.

    I am not saying send more photons, I am saying send the same number, but use more kinds of them.
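
    (The counting argument made concrete -- plain Hartley/Shannon bookkeeping, assuming equiprobable, noiseless tokens, which is my simplification:)

        # Information per token grows with the alphabet size: log2(colours) bits per token.
        import math

        for colours in (2, 4, 16, 256):
            bits_per_token = math.log2(colours)
            print(f"{colours:>3} colours -> {bits_per_token:4.1f} bits per token; "
                  f"1000 tokens carry {1000 * bits_per_token:6.0f} bits")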

    101:

    Assuming that proper precautions are taken to make sure a physical message arrives in a usable form, the truckload of tapes has a distinct advantage in that it's going to get there no matter what. After the initial push, the message is on its way in its entirety and famine, terrorism, bankruptcy, or government overthrow aren't going to stop it from eventually making it to the destination. Of course, some communication halting event will also make an eventual reply pretty catastrophic as potentially hundreds of years pass until the post unexpectedly arrives at relativistic velocities.

    102:

    That's true, but is it more energy efficient? How much energy do two computers burn sending each other 1TB of data? How much gas does a postal truck take to run? Also, think about the scaling. A postal truck going twice as far takes twice as much gas, but two computers separated by twice the distance don't need twice as much energy to send the message.

    103:

    Of course any civilisation with a total energy generation capability of >1.75 on the Kardashev scale will probably be able to limit the amount of output from their sun in a given direction, as a direct result of their Dyson-swarm-based power supply infrastructure. Such a "Nicoll-Dyson semaphore" would be the most energy-efficient means of communicating across stellar distances... unless there are some intrinsic limitations to that method. The obvious problem is that the bit rate of the semaphore is limited by the resolution of the target system's telescopes, and the process isn't as intrinsically "eye-catching" as spamming the universe with von Neumann or Bracewell probes.

    104:

    Needed to look up most of the terms being used in this thread - however found a reference you might be interested in:

    Optical and Fiber Communications – Free-Space Communications by Arun K. Majumdar, Jennifer Crider Ricklin. I think what you're looking for starts around page 393.

    Here's the book description -

    This book provides a comprehensive, unified tutorial covering the most recent advances in the emerging technology of free-space laser communications (FSLC), where interest and attention continue to grow along with the number of technical challenges. This book is intended as an all-inclusive source to serve the needs of those who require information about the basics of FSLC, as well as up-to-date advanced knowledge of the state-of-the-art in the technologies available today. Topics covered include a combination of atmospheric effects for laser propagation and ...

    105:

    There's a free PDF on Holographic Optical Beam-Steering (HOBS) that might also interest you: it claims to overcome some of the limitations of laser communications such as the need for unobstructed line-of-sight.

    http://www.wpi.edu/Pubs/E-project/Available/E-project-101310-153124/unrestricted/HOBS_MQP.pdf

    106:

    Signal loss might be less dramatic than thought if you have a read of this.

    107:

    When we get to the point where we can't afford to replace communications satellites, we'll instead repurpose our ICBM fleet (downscaled, of course, although bigger than the LDRS models), to ship memory crystals between continents

    This doesn't make any sense at all. We mostly use comsats for military stuff, TV, and for some in-fill backhaul. These uses don't match with anything like that. Everything else is fibre.

    108:

    You may find the answer if you read the last sentence of that paragraph again. To be less opaque: Under the general assumption that such technology arises from predecessors here on Earth, I figure that if we're going to get in the infomissile business, we're going to see predecessors of it on Earth long before it gets implemented in space. Therefore, we're probably going to be using laser-powered ICBMs to ship exabyte flash-drives to Uzbekistan long before we start using similar devices on an interplanetary or interstellar scale. If you think this is ludicrous, that's kind of the point.

    Shipping information this way looks a lot like a military threat, if not an outright attack. As Charlie noted, a gram-sized starwisp "only" takes something with approximately the energy of a Hiroshima-style nuke to kick it up to speed, but it involves targeting that's better than we can do on Earth (akin to me using a hyper-rifle to shoot a message to someone in Iran, while allowing for windage, local variations in gravity, and of course the turning of the Earth while the bullet travels), and similar super technology. In other words, it's quite hard to make such message transfers obviously non-threatening. Having the technology to ballistically target another star system would say quite a lot about our threat potential, whether we meant it to or not.

    109:

    That article would be more exciting if it weren't published on April 1st.

    110:

    That was a 1st of April report...

    111:

    TP has a point, though. Thanks to the uncertainty principle, there's a limit to how collimated a laser beam can be.

    My books are in storage. Could somebody calculate the minimum uncertainty in the beam's lateral momentum for, say, a 5cm diameter, 500 nm (2.5 eV) laser and figure out what sort of illuminated area we'd expect at a 20 light-year (i.e. Gliese) range?

    I'm thinking that with a telescope linked to a monochromator and a photomultiplier, it may be doable, but I haven't done the math.
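
    (Here's a back-of-envelope go at it, using the diffraction limit -- which is the uncertainty-principle argument in disguise -- and the numbers given above. Treat it as a sketch, not a proper optics calculation.)

        # Half-angle of the beam ~ 1.22 * lambda / D for an aperture of diameter D.
        wavelength = 500e-9        # m
        aperture   = 0.05          # m (5 cm)
        ly         = 9.461e15      # m per light-year
        AU         = 1.496e11      # m per astronomical unit

        theta = 1.22 * wavelength / aperture       # ~1.2e-5 radians
        spot_radius = theta * 20 * ly              # ~2.3e12 m
        print(theta, spot_radius / AU)             # illuminated radius ~15 AU at 20 light-years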

    112:

    The key word here for lasers (and other gaussian beams, should you find one in your pocket) is "Rayleigh Range" - Wr = ( pi * (w0)^2 ) / lambda. lambda is light wavelength, w0 is beam waist / minimum cross section width. pi is sometimes obvious.

    Radios end up being searchlights, of finite but constant beam spread. Gaussian beams you can hope to keep usefully focused out to very long distances (up to a point, defined roughly by Wr).
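
    (Rough numbers for that formula, taking w0 as the waist radius -- i.e. half the quoted beam width, which is my reading of the definition above:)

        import math

        wavelength = 500e-9                            # m
        for w0 in (0.025, 500.0, 5000.0):              # 5 cm, 1 km and 10 km wide beams
            z_R = math.pi * w0**2 / wavelength
            print(f"waist radius {w0:>7} m -> Rayleigh range {z_R:.2e} m "
                  f"({z_R / 9.461e15:.2e} light-years)")
        # Roughly 4 km, 1.5 light-hours and 0.02 light-years respectively.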

    Generally, this discussion is focusing overly on bandwidth and not enough on latency. Some information is useful whenever; some is not.

    Some information will be useful with a 1.0 C propagation rate, and useless at 0.1 C. Some information's utility will not depend on the time in any significant way.

    Still other information will not even be useful over likely interstellar distances at 1.0 C (or 0.5 C for round-trip conversations). Human judgement, applied at a distance to rapidly evolving situations, may well be without utility at even the 0.5 C of round trip lightspeed communications.

    The third information flow mechanism would obviously be, in situations demanding it, the flow of "the right few people" being sent to Colony X to deal with something in real time, where the tyranny of C made any other form of remote advice or help useless. I have a story cooking along these lines.

    113:

    Story idea: We set up a protocol for receiving the memory packet from the aliens, but society collapses shortly thereafter. Receipt of the packet (in orbit) sparks a space-race between neo-primitive empires.

    114:

    Earth is a special case: we can lay fibre. It looks as if optical bandwidth maxes out, with wavelength-division multiplexing, at around 1 Tbit/sec. However, you never lay just one fibre -- you can lay a bundle. Thousands, for that matter. Want to ship 10^18 bits -- a thousand petabits -- from the USA to Europe? Over a thousand-fibre bundle, it'll take you on the order of a thousand seconds with virtually no latency (well, a few tens of milliseconds). Whereas if you want to ship it via memory diamond you have to serialize it, load it on a rocket, launch it across the ocean, wait 20-30 minutes for it to land, retrieve it, and unserialize it. Which is going to take a lot longer and cost a bunch more. Yes, laying an intercontinental cable costs billions (read that link! It's fun!) but so does developing a ballistic intercontinental cargo system, and the running costs for the latter are almost certainly higher.

    It's not until the amount of data you want to ship from (a) to (b) gets into serious exabit territory that physically moving it becomes cost-effective compared to truly maxed-out fibre-optic cable. And once you hit that point, you've got serious issues with getting the data in and out of the serialized form at either end ...
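
    (Rough Python version of that comparison. The fibre numbers are the ones above; the serialization rate, writer count and flight time for the rocket are placeholder guesses of mine, deliberately generous to the rocket.)

        bits       = 1e18           # the thousand petabits in question
        fibre_rate = 1e12           # ~1 Tbit/s per fibre, wavelength-division multiplexed
        fibres     = 1000           # fibres in the bundle

        fibre_seconds = bits / (fibre_rate * fibres)
        print(fibre_seconds)        # ~1000 s, i.e. about 17 minutes

        # Memory-diamond-by-rocket: serialize, fly ~25 minutes, deserialize.
        write_rate  = 1e12          # assumed serialization rate per writer, bits/s
        writers     = 1000          # assumed parallel writers (generous)
        flight_time = 25 * 60       # s
        rocket_seconds = 2 * bits / (write_rate * writers) + flight_time
        print(rocket_seconds)       # ~3500 s even with these generous assumptions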

    115:

    It's just that the Rayleigh range is zero for all practical purposes in long range communication. It's about one third the distance to Pluto for a 1km wide 500nm laser - about two light hours. If you manage to build a 10km wide resonance chamber with perfect optics (tens of nm) and keep it collimated, you can get 0.04 ly. If you can turn a large asteroid into a 100km wide laser, you could just about project a 100km wide laser beam on a planet on Alpha Centauri for the split (and diced) second you can hold it steady on that spot over that distance...

    Any significant distance beyond that, a laser works approximately like any other source of electromagnetic radiation (search light, radar, whatever). Intensity falls off with the distance squared as the beam width increases linearly with distance.

    In other words: why bother with all that, when you can simply build a damn dumb dish?

    116:

    I note that you just pointed out one particular large source of latency there: the time it takes to serialise the data on and off the crystal. The stationwagon full of backup tapes has always struck me as missing that point: the time taken to write a single backup tape is non-trivial.

    This is not to say that Tanenbaum was wrong, but unless you can efficiently parallelise that writing/reading process, you end up with interesting bottlenecks. Given enough distance, that gets amortised into the travel time, but it can make it unexpectedly inefficient for distances that are too short.

    For interplanetary communication, fibre is obviously out. Unless, well, perhaps we could string fibres between the planets. Perhaps we could have some form of biological creatures creating them. Something like giant spiders spinning webs across the solar system.

    Brian Aldiss? Paging Brian Aldiss?

    117:

    Okay, you disliked my first idea for costing orders of magnitude more than yours. So here's a second idea that is orders of magnitudes cheaper than yours.

    Instead of spending zillions sending photons (or firing photons to send atoms) why not just modulate photons that are going there for free?

    We tow a sail array, perhaps 1km square when unfurled, to approximately 1AU away from the sun. Position it in direct line of sight between our proto-pen-pal aliens and the sun.

    Then somehow (mechanically, photo-electrically, whateverly) modulate the light that it is blocking.

    A flickering star will attract the attention of aliens at well below our current technical level; even the ancient Egyptians would have looked up if a star winked at them.

    Most of the energy needed to run the interstellar heliograph can come from a solar array floating nearby.

    It'll need the occasional resupply of volatiles to keep it hot/cool and/or on station.

    With the money saved, we can launch hundreds, and be chatting away across the galaxy in no time.

    118:

    A 1-bit signal?

    OK, a 1 bit per however long it takes to modulate an entire sail.

    The problem with this solution is that it is vastly more expensive per bit sent than just shining a laser. It's possibly using less power per second, but its data rate is so dismal that its energy per bit is through the roof.

    Also, you're intercepting one part in 10^12 of the Sun's light. It's going to be lost in the noise: a solar flare will be billions of times brighter as far as our distant observers are concerned.
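
    (Quick check on that figure: for a far-off observer the dimming is roughly the sail's area divided by the area of the Sun's disc.)

        import math

        sail_area  = 1.0                       # km^2
        sun_radius = 6.96e5                    # km
        fraction = sail_area / (math.pi * sun_radius**2)
        print(fraction)                        # ~6.6e-13, i.e. about one part in 1.5e12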

    119:

    Aargh! I now find myself hooked into reading a 56 (yes, fifty-six!) page article. I have real work to do!

    Damn you Mr Stross, purveyor of geek-trivia-crack!

    120:

    It gets better if you read it back-to-back with "Cryptonomicon" -- putting two and two together, Neal managed to get WIRED to fund his research for that novel, and the article you're reading (which at 50,000 words or thereabouts is actually a short book) is his write-up.

    121:

    Try using the "for Print" version. It'll stack all 56 pages in a single one.

    122:

    Around 10 million gravities, initially

    123:

    Ah, the cunning wiles of the professional author! (Though, as you point out, I think they got their money's worth out of him)

    Another book on the looooong "to read" list.

    124:

    The serialization cost was considered in the last Tanenbaum implementation I heard of, which involved transferring a 'more primary' source of the data - when Aardman Animations were collaborating with Dreamworks on Curse Of The Were-Rabbit, they leased significant fibre capacity for transfer between the UK and Florida. If the line went down, the backup plan involved yanking the hard drive and a rolling booking with BA...

    125:

    A 1-bit signal? Not necessarily; do not confuse bit rate with baud rate.

    We could modulate/flicker/transform the signal in many ways, each of which would up the bit rate.

    And we could have several devices all aimed at a slightly different point in the target system, so parallel transmission could happen.

    The transmission rate would be limited only by the sensitivity of the alien's detectors.

    If we were friendly, we'd start at a low bit rate.

    The first thing we'd transmit, perhaps the only thing we'd transmit initially, is the operating manual. Once the aliens have signaled directly to the device with their preferred transmission rate, we'd go to full-on info mode.

    So we're low power until they complete the handshake.

    No annoyingly beaming lights in their eyes to get their attention. Much more friendly.

    126:

    You have reminded me of the small amount of time I spent hanging out with my buddy's grandfather back in college. He was way older than us, but he was "one of us" in the techie/hacker sense. At the time, we were playing around with tube lasers (diode lasers being too expensive to be practical as of yet), and he showed us his portable laser pointer that he rigged up from a HeNe tube and a car battery. He showed us his tube-based ham radio gear that he kept running decades beyond its expected life via the trick of running all the tubes at 100 volts instead of 120 volts, dramatically reducing the wear on them.

    The reason you reminded me of him is the reason he was brought out of retirement from time to time. He was one of the people who knew how to do work on the ELF radio systems that were so important to the Navy. This was an extremely low bandwidth mechanism, but it has the advantage of penetrating seawater. So, all they'd ever send out on it was "hey you, submarine, message incoming, so poke an antenna up through the surface please". The bandwidth was too low for much more than that, but that one use gave it critical importance.

    (See the connection?)

    (Some day I should tell you about the test my laser-enthusiast buddies came up with for determining the quality of a beer.)

    127:

    The author photo on my copy of Cryptonomicon (U.S. hardback) is in fact the same picture of Stephenson that was on the cover of that issue of WIRED.

    128:

    Actually, Charlie, I do know a bit about communications lines.

    The bigger point is that a system that launches interstellar memory-diamond communications isn't going to come out of nowhere. It has to have technical predecessors. One of the bigger barriers to such a system being built is that we've got little or no reason to build its necessary precursors (memory diamond missiles, sub-micrometer level aiming precision for bullet-sized ballistic projectiles over intercontinental distances, bullets that can stop themselves on cue, etc) because we've got other technology that can do the same job more simply, and we've had it for decades. Even in a Post Oil situation, I wouldn't be surprised if they keep laying communications cables, simply because the first cable-laying ship showed up in 1868 or thereabouts.

    129:

    First transatlantic cable 1866 - laid by the late I. K. Brunel's Great Eastern

    130:

    Well, there's two things: total energy cost of setting something up from scratch, and marginal energy cost given a mature system. Charlie seems to be talking about the latter, which is appropriate for a hypothetical mature interstellar civilization.

    131:

    A sandwich of a hundred or so of the wavelength-thick layers used as coatings on photographic lenses should be quite opaque if layered right (for a thin wavelength band), and can be below 100 microns in thickness. At the Earth's diameter, a disk of that thickness would be a few cubic kilometers of material; that's a lot for space, but not that huge. A minor asteroid is plenty.
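
    (Checking the volume claim:)

        import math

        radius_km    = 6371.0                  # Earth's mean radius
        thickness_km = 100e-6 * 1e-3           # 100 microns, expressed in km
        volume_km3 = math.pi * radius_km**2 * thickness_km
        print(volume_km3)                      # ~13 km^3 -- "a few cubic kilometers" indeed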

    Make the disk stable by orbiting it around a central mass. (Think a somewhat larger central asteroid surrounded by something like Saturn’s rings but made of ridiculously thin mirrors.)

    Now put the whole thing in a fixed spot relative to the Sun-target planet axis; since it's far from the Sun and large and thin, you can probably stabilize it just with clever use of photon pressure.

    Then you can basically send at any bandwidth you can modulate the “umbrella”’s opacity at.

    132:
    That's not the tricky or surprising part - the acceleration is.

    Oh, you want me to do your homework for you, eh :-) Sorry, finals humor.

    Anyway, I didn't give you the acceleration because, among other reasons, you haven't given enough information to solve the problem. Let me guess, you probably got the velocity increment of the sail as something like Δv=2E/(mc) or Δv=aE, with a=2/(mc), and you probably got the energy decrement from the initial amount as ΔE=2E^2/(mc^2) or ΔE=bE^2, with b=2/(mc^2). And you probably got a messy sum, but you did something with it that resulted in some impossibly high acceleration.

    Well, that's because you specified an amount of energy and not an amount of power, so of course your calculations are going to be out of whack (you can see this by applying dimensional analysis to your increments). In fact, if you worked the problem assuming that the sail instantaneously reflected a thousand-joule pulse of light, you'd get infinite acceleration. For a more physical answer, you might assume that the sail intercepts a thousand joules of energy in the time it takes light to cross one centimeter (since that was the initial distance you specified.) But a light-centimeter is equivalent to 0.01/(3 x 10^8), or roughly 3 x 10^-11 seconds; iow, in watts, your sail is intercepting something like 1,000/(3 x 10^-11) = 3 x 10^13 W, which results in an acceleration on the order of 10^8 m/sec^2! To put it another way, yes, given what you've specified, your calculations showing that your sail will accelerate very fast are indeed correct.

    Apologies for being in exposition mode; again, it's finals time here and I have to explain to students why they got the problems wrong that they did, and no, I'm not being a hard grader. Quite the contrary ;-(

    133:

    One aspect which doesn't seem to be factored into this is the marginal value of each signal bit. The first bit, of "Hi I'm here!", is an awfully important one. The next few million while we exchange basic information on biology, language, mathematics, and engineering are also quite interesting. The last 2 gigs of the 1080i footage of the 2012 snowshoe marathon championship is a bit more niche. It seems quite possible (depending on both the signal and wisp parameters, and the probability that we are able to understand each other in any meaningful way and/or having anything useful to say) that with a signal laser one could capture 99% of the value of the exchange in the first year of transfer, long before the wisp arrives, much less someone sending one back on a return trip.

    134:

    TP1024 writes:

    It's just that the Rayleigh range is zero for all practical purposes in long range communication.

    ...and you then go on to discuss only projects with diameters up to about 100 km.

    Really large - thousands of km or larger - lasers and phase-synchronized laser arrays are on the short list of technologies for interstellar transport, given lightsail drive requirements (though, for very small probes, one can do with masers and remarkably small apertures, to start with...). People who do big lasers have been involved in these projects and don't consider the scaleup to be that ridiculous compared to the overall scope of building interstellar range lightsails and explorer vehicles.

    Fundamentally, either you have to posit you're talking at interstellar distances with someone or something you sent there, in which case the technology base needed to handle interstellar transport is available, or with someone or something else who was naturally there on their own, which is likely at significant ranges unless they're just so freaking common in the universe that credulity is strained.

    If it's someone you sent, you either have to posit something like lightsails (and thus large lasers) or something like antimatter (and energy levels to produce that which are even more insane), or technology we currently don't understand and can't characterize usefully. Once you have lasers to drive lightsail craft, then interstellar communications becomes a relatively easy spinoff.

    135:
    Around 10 million gravities, initially

    Ah, just saw this; I've been rather busy lately. Does my explanation make sense? Or did you figure this out and then went into Ripley's Believe It or Not mode when you posted your question? The latter, I'm guessing.

    These high accelerations aren't all that implausible for solar sailing, btw. It's quite possible for small particles to accelerate at many hundreds of gees when they're close to the Sun or square in the beam of a laser of even modest power.

    That's also why I went with light-propelled fairy dust in my original post on the other thread - for very small particles, it's no problem to jet around the system at hundreds of kilometers a second using only passive light propulsion. The hard part is packing that much intelligence into something that small, but even there I think I can safely say that as a design concept intelligent dust assumes only what we in the sf community would cavalierly describe as "modest" advances in chip fabbing.

    136:

    One thing I particularly like about the concept of beamed sail propulsion is that the investment in masers/lasers can be done for peaceful purposes. Initially you have SPSs with beams directed to earth. Then you have the powersats moved to closer solar orbits (assuming the net power received is much higher and cheaper). This allows development of focusing technology. Now it becomes just a modification of an off-the-shelf technology to power any interplanetary spacecraft (Dan Dare Spacefleet-like ship propulsion or sails depending on mission). Finally you start driving interstellar vehicles.

    Technology development money isn't spent on technology that has little to no short-term benefit to taxpayers, as it would be for an Icarus-type stellar probe. Spending would have a return, and the eventual economic growth plus more mature technology would make interstellar probes/data packets a viable and relatively inexpensive project, with much of the technology redeployable after use.

    137:

    The obviously easy answer is to work out velocities assuming all the light energy ends up as kinetic. The acceleration comes from the photon pressure. Consider 1kJ bouncing around a 1cm cavity. That's each face of a 1g mirror reflecting roughly 30TW.

    Photon pressure (reflected) for 1W is about 10 nPa over a 1 square metre area, i.e. roughly 10^-8 N per watt reflected. The total force then becomes 3 x 10^13 W x 10^-8 N/W = 300,000 N. F = ma, so a = F/m = 300,000/0.001 = 3 x 10^8 m/s^2, or about 30 million gees.

    Correct to within an order (or two) of magnitude!

    138:

    Although I have to say, I suspect a flaw in the calculations due to the way I averaged the cavity power over 1 second. As the mirrors start to move apart the power falls rapidly and the calculations fall apart. I will have to do a proper analysis one day (or post it on sci.physics.research !)

    139:
    Although I have to say, I suspect a flaw in the calculations due to the way I averaged the cavity power over 1 second. As the mirrors start to move apart the power falls rapidly and the calculations fall apart. I will have to do a proper analysis one day (or post it on sci.physics.research !)

    Yeah, that's the part I thought you wanted help with. It's not difficult, but it is messy: You get velocity increments Δv1=a*E1, Δv2=a*E2, . . . , Δvi=a*Ei, etc, with E1 being the original amount of energy, E2=E1-b*E1^2, E3=E2-b*E2^2=(E1-b*E1^2)-b*(E1-b*E1^2)^2, etc. Playing around with the successive expansions gives you a reasonably tidy summation formula for successive values of Ei. But once you get that, you still have to sum up the individual Δvi's, make substitutions, correct for changing distances, etc, at which point the sum gets just a bit more complicated. Since manipulating these types of expressions is just what Mathematica or even Excel, er, excels at, I didn't make the effort to see if things got tidier. But it is doable for all of that, and easily so. Providing you use quantities with the correct dimensional attributes, of course :-)
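
    (For the terminally curious, here's a crude numerical version of that bookkeeping. Assumptions: the 1kJ is a single pulse bouncing along the axis, the mirrors are perfect and non-relativistic, each reflection delivers an impulse of 2E/c, and the light pays for whatever kinetic energy the mirror gains. The final speeds come out right; the distance scale over which the acceleration happens depends strongly on how you model the trapped light, so don't read too much into that part.)

        c, m = 3.0e8, 1.0e-3            # m/s, kg per mirror
        E = 1.0e3                       # J of light bouncing in the cavity
        sep = 0.01                      # m, current mirror separation
        v = [0.0, 0.0]                  # outward speed of each mirror, m/s
        moved = [0.0, 0.0]              # distance each mirror has travelled, m
        dist90 = None                   # where a mirror first passes 900 m/s
        i = 0                           # index of the mirror about to be hit

        while E > 1.0:                  # stop once 99.9% of the energy is kinetic
            dt = sep / c                # light transit time (mirror speeds << c)
            sep += (v[0] + v[1]) * dt
            moved[0] += v[0] * dt
            moved[1] += v[1] * dt
            dv = 2.0 * E / (m * c)      # photon-pressure impulse, delta p = 2E/c
            E -= 0.5 * m * ((v[i] + dv) ** 2 - v[i] ** 2)   # light pays for the mirror's KE
            v[i] += dv
            if dist90 is None and v[i] >= 900.0:
                dist90 = moved[i]
            i ^= 1                      # the light now heads for the other mirror

        # Takes a second or two to run; ends with both mirrors near 1000 m/s.
        print(f"speeds ~{v[0]:.0f} and {v[1]:.0f} m/s; 90% of the limit reached after "
              f"~{dist90 * 100:.1f} cm of mirror travel")

    Under this particular idealization the mirrors get within 10% of their limiting speed after a couple of centimetres of travel, which at least supports the "almost all in the first cm" hunch upthread.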

    140:

    Yes I meant The Ophiuchi Hotline. See why I could not spell it.
    To do most, if not all, of the things posted here, unobtanium is needed. THINKING IT IS THE EASY PART.

    141:

    Would it be feasible to send, instead of largeish (gram-weight) packages every decade, more dust-sized ones with very high frequency?
    Also, at that size, could you use the target's stellar wind for braking?

    While we're at it, any good way of measuring anomalous dust currents near very old stars?

    142:

    I could probably reduce it to an equation with an integral sign in front, but I'm too busy/lazy right now. I still suspect that almost all the acceleration would be in the first cm of travel

    143:

    Arecibo (305m) is nowhere near the largest practical size for a radio antenna. At this very moment, there are ~1000m antennas in orbit. They're mostly made of nitinol memory wire, crammed into a capsule for launch, and then allowed to re-expand over the course of a few days.

    The NSA uses them to eavesdrop on cellular phones.

    144:

    I thought it was 100m, which is certainly more than enough to pick up mobile phones

    145:

    Links about nitinol antennas?

    147:

    Nothing there about nitinol, though...

    148:

    Memory alloys are used to deploy antennas. Not sure if the antennas are made from them, though.

    149:

    The mirror pair system physics needs to include the energy lost in the photons due to moving mirrors, the non-perfect mirrors (even 99.99% means that losses over 10000 bounces will be catastrophic), etc.

    Photon thrust is extremely useful but not infinitely useful ;-)

    150:

    Well, there are obviously some idealizations involved! Not to mention that as the photons lose energy their wavelength increases and mirror spacing becomes critical. However, in the microwave region superconducting mirrors can be far more efficient than the best optical ones.

    Also, peripherally related is this fascinating article: http://www.technologyreview.com/blog/arxiv/23198/

    151:
    The mirror pair system physics needs to include the energy lost in the photons due to moving mirrors, the non-perfect mirrors (even 99.99% means that losses over 10000 bounces will be catastrophic), etc. Photon thrust is extremely useful but not infinitely useful ;-)

    Well this was an explicitly idealized setup using those classical perfect mirrors, frictionless pulleys, etc. But our photon piston does illustrate what I call the O'Neill effect (after Gerard O'Neill of course), which is the propensity to jump from "physics says this is possible" (and its dodgier cousin, "nothing in physics says this is impossible") to "so the rest is just a matter of engineering".[1] I mention this, of course, because this type of argument seems to be a perennial favorite of space enthusiasts. It's kind of cute actually, and I have no problems admitting I used to think this way myself.

    [1]I think it can be justifiably said that O'Neill was a serial offender. I've got an old Halliday & Resnick with an essay of his breathlessly informing the student that it's only a matter of time before "magnetic flight" becomes a reality, by which he means it's "only a matter of time" before the dominant mode of transportation will be magnetically levitated trains shooting through a vast network of underground evacuated tunnels and moving so fast that the passengers will be in free fall - oh, and that this will be cheaper than any other type of transport. Sticking to his tried-and-true stump speech, he then says that while this may seem like the product of a super-science centuries in advance, we have the capability to do this right now (in 1974).

    Of course, this exposition is entirely forgivable; after that windup he goes on to say that the physics of magnetic flight depend on Maxwell's equations - the equations you're going to be learning from this book in just a few seconds, boys and girls :-)

    152:
    However, in the microwave region superconducting mirrors can be far more efficient than the best optical ones.

    Heh, I got your magic piston propulsion right here: Use a linear accelerator to shoot high velocity grapeshot at your spaceship. The spaceship then catches the grapeshot, decelerating them with an arrangement of magnetic loops. Using the electricity generated by grapeshot moving through a magnetic field, an on-board linear accelerator flings them back home where they are caught by the same regenerative braking trick. Rinse and repeat, using electricity from the braking cycle to power the home accelerator. At this point, the only additional energy you need is to make up for losses within the system.

    Hey, "it's only a matter of time" :-)

    153:

    This has been an interesting discussion, but from a Fermi Paradox POV aren't both of the communication strategies posited equally invisible to the observer away from the beam path?

    Readers who enjoyed "Mother Earth, Motherboard", as linked to above, may also wish to check out In the Kingdom of Mao Bell (or, "What I Did During My Researches for The Diamond Age") and In the Beginning Was the Command Line (now a little out of date, and in any case I can hardly believe anyone reading this blog hasn't memorised it, but anyway...).

    154:

    Yes. GEO is not that far away, ca. 40,000 km, and there are sats in GEO that do just fine with less than 100 m antennas and hand-held radios/phones. See Thuraya and the AN/PRQ-7 CSEL.

    155:

    Lots and lots of physics here, and that's very nifty. However -- and Charlie, I'm talking to you here too -- this is a really good premise for a short story or two, perhaps even a galactic-scale space opera...

    Especially if you combine some of your virtual reality ideas (a la Accelerando) into it, so that some of the payload being transmitted is virtualized intelligences, and you add a diplomatic component to the mix as well.

    Just a quick idea from an ardent new fan who is rapidly acquiring as many of your published works as are available... Keep up the awesome work(s)!

    156:

    Just thought of another twist, apologies if somebody else has already mentioned it (scanned the comments but didn't see any obvious reference) --

    Add to the payload an 'ansible' entangled comm system (courtesy Orson Scott Card and other sci-fi greats) and all of a sudden, your bandwidth with respect to time dramatically jumps. This assumes that quantum entanglement can be maintained across interstellar distances. Not a given, but nothing in standard model or any of its competing models prevents this...

    Back to incorporating it into story ideas, with an ansible point, one could construct in-situ a t-gate a la 'Glasshouse' (assuming the probe can do so, piggy-backing on the entangled core of the ansible), and you've got your physical link (via remote copy).

    Interstellar diplomacy / colony prep via advance team (with trailing slow-boat) / invasion via subterfuge... take your pick.

    Put an ansible or two in orbit around each developing intelligence world, monitored by a paranoid ET civilization, that deliberately fosters 'religion' and actively meddles in politics / socialization to prevent militaristic xenophobic civilizations from ever breaking orbit...?

    Sorry, I'll stop now...

    157:

    When you've been here a bit longer, you'll begin to realise that part of what Charlie does on this blog is bounce ideas off his reading public. So you shouldn't be entirely surprised if something vaguely connected to this does turn up in a year or two (depending on publication schedules).

    (Bear in mind the maxim that knowledge is money.)

    158:

    Ahem: Minimum 18 months, more typically 2-4 years. (I hands 'em in when I finish 'em; a year later they show up in print. The time for brainstorming is before I writes 'em; and the writing process can take anything between four weeks and five years.)

    159:
    This has been an interesting discussion, but from a Fermi Paradox POV aren't both of the communication strategies posited equally invisible to the observer away from the beam path?

    Well, yeah, and I think this observation is what characterizes the riffing here, namely that what we see is in perfect accordance with what you would expect to see even if intelligent life turns out to be relatively common and the lifetimes of their hi-tech hi-energy civilizations can be measured in the millions of years.

    What's become apparent - to me at any rate - is that a lot of discussions concerning the resolutions of Fermi's paradox are really at root discussions about the real-life technical options that are actually available to a typical representative of this tribe. As opposed to, say, the wild-eyed speculations of various and visionary sf writers :-)

    And my sense is that - damn me for a mundane - not only are Earthicans rapidly approaching the upper limits to what is physically possible in this universe (that's just the Singularity Bop, or possibly the Singularity Hustle), but we're already pretty close to those limits as well. That is, while I'd be surprised if we actually hit those limits in the next 100 years, I'd be equally surprised if we didn't reach them in the next 1,000.

    Since this opinion seems to be met with some hostility on occasion, let me be perfectly clear on this one: I'm fairly certain that we already know those limits allow some form of sustained interstellar exploration out to several hundred or thousand light-years, or even across the length and breadth of the galaxy for those who have the sitzfleisch. It's just that in the default case those activities - contrary to some implicit and unconscious assumptions - don't have any signs that are easily detectable let alone obvious from our vantage point here on Earth. No easy ftl for exploratory vessels. No, not even if 'easy' means 'powered by annihilating kilograms of antimatter every second' :-) No system of traversable wormholes made stable at the cost of a trillion trillion joules expended each and every second for each and every gate in the network for centuries and millennia on end. No Dyson spheres capturing every erg of output from the primary to meet the energy requirements of the local yabos. No species physically colonizing other systems at stl speeds and eventually occupying billions of planets over the course of millions of years. No crisscrossing beams of interstellar chatter which are even remotely likely to be intercepted by the new kids on the block just by random chance. No asteroid belts littered with the debris of a million automated investigations initiated long ago and far away. And of course, no examples of local life that can only be explained by positing a non-local origin.

    Off the top of my head, these are the sort of things people mention or seem to be thinking of when they talk about the lack of 'obvious' evidence, but I'm guessing there are a few other categories of obvious that I've missed. Anybody have any other examples for other types of evidence?

    160:

    To put it another way (assuming anyone is still reading this thread), the fact that we see no signs of other intelligent life (let alone the honking big hit-you-over-the-head-obvious signs we should be seeing) is not evidence that such life is rare or short-lived; it's evidence - and very good evidence by the very nature of Fermi's paradox - that the upper bound for what is technically possible for any civilization no matter how old or advanced it may be is a lot lower than what we once thought it was. Think of this type of reasoning as employing a sort of reverse principle of mediocrity; in this case it is not ourselves that are assumed to be typical, it's all the other hypothetical alien intelligences that are assumed to be typical.[1]

    An odd bit of psychology - the few people I've test-driven this one for seem to be a bit disconcerted by the conclusions. In fact, I got the distinct impression that given their druthers they'd much rather live in a universe where the only known intelligence was human if that meant they got to keep their fusion rockets, their space colonies, and all the rest of their ubertech kit as opposed to a universe where intelligent life was common but technically unable to get a presence in space much beyond a few dozen token astronauts briefly visiting the companion satellites of their planet.

    [1]Shades of the Campbellian trope that when it comes to intelligent species Humans are not your typical Grrf Blltnik. The fact that not one of those millions of extraterrestrial civilizations has ever been able to do the interstellar thing despite having a million-year head start on us in no way suggests that the fundamental laws of this universe might forbid any such peregrinations (and thus impose an intolerable constraint on Humans as well).

    It just means it's up to Arcot, Wade, and Morey (or possibly Dicky Seaton and his gang) to come up with some sort of ftl drive over the weekend so they can free those sentients in all their trillions from the prison of their racial shortcomings. Yeah, I know :-) But I've seen that trope used as recently as sometime in the 90's and by none other than Charles Sheffield, a guy I wouldn't normally associate with that sort of thing.

    161:

    Well, if one is in a Lovecraftian mood, one could consider mass extinctions as evidence of interstellar colonization.

    I've been playing with the idea of galactic civilization as crudely analogous to a metapopulation, or perhaps to a vernal pool. In either case, galactic civilization consumes so many resources, so fast, that it can't exist on any one planet very long (decades to a few centuries). As the planet runs dry, galactic civilization moves on, leaving the planet to regenerate for, oh, another 10-50 million years. Given enough habitable planets and easy ways to get between them (dimensional gates, living spaceships "falling between the stars," etc.), a galactic civilization could persist indefinitely, but only on a few planets at any one time.

    Given how we've managed to mobilize a huge amount of biomass (and the associated chemical elements mined out of the Earth) and strung up a world-spanning network for moving energy and materials around rapidly, I'd say we're ripe (literally) for a visit from this type of civilization, assuming they know about us.

    In other words, the stars are almost right. Cthulhu ftaghn?

    162:

    fmacay @ 153 That link re Shenzen in 1993 - how quaint!

    See also: HERE The foreign journos have made it to this village, and erm, interesting.

    163:

    Well, I was trying to be vague. I'll try 'year or three' next time to better communicate that.

    164:

    One thing where we are nowhere near the limits is efficient computing. We are factors of trillions away from the limits, or more. One might also presume that is how far we are away from the "ultimate intelligence"

    165:

    Of course, one might also argue that our computer systems are also on the order of trillions of times less powerful/efficient than the human brain, which we already have several good examples of.

    By that standard, a factor of merely 10¹² will only get us to...

    166:

    I've been playing with the idea of galactic civilization as crudely analogous to a metapopulation, or perhaps to a vernal pool. In either case, galactic civilization consumes so many resources, so fast, that it can't exist on any one planet very long (decades to a few centuries). As the planet runs dry, galactic civilization moves on, leaving the planet to regenerate...

    I'm not sure if this really holds water for a whole civilization; if they have the means to travel easily to new star systems they should have the means to mine out the lifeless rocks wherever they happen to be already.

    On the other hand, it would be a really cool story to read about some complacent medium-tech planet suddenly visited by a horde of spectacularly post-Singularity tourists, like a Tinker enclave with starships. Come to think of it, I'm sure someone already has written that novel...

    167:

    Scott-Sandford @ 166 A & B Strugatsky Roadside Picnic errr ...

    168:

    It's not about potential technical capacity per se, it's about high-grading, which is kind of what western society does already. Why terraform a space rock, when there's some defenseless planet out there with a jungle to cut down and arable land for plantations, especially if both are equally easy to get to?

    In fiction, we've done pretty much that very thing repeatedly, and no one bats an eye. I just looked at what's happening today, with our highly productive and highly fragile global ag/industrial systems, and figured that the only way to maintain such a civilization on an interstellar scale is to move to a new planet every hundred years or so. Certainly, if the aliens are scaled to Cthulhu's size, that's about the only way they could maintain a civilization for any amount of time.

    169:
    Why terraform a space rock, when there's some defenseless planet out there with a jungle to cut down and arable land for plantations, especially if both are equally easy to get to?

    That's sort of the flip side of my riff on why we can't see any signs of alien intelligence. They don't know we're here, or if they do, don't think it worth the effort to contact us. But once we're advanced enough (and foolish enough) to attempt to contact them, they'll be here in a galactic New York minute to take our stuff. Why not? A lot of sf up to the '50s used the premise that once we got cheap ftl we'd use it to go out and grab their stuff. Everybody wants to hit the road to find the Good Land. Nobody wants to put in the sweat to actually cultivate one.

    Btw, is it only a coincidence that metapopulations and vernal pools figure heavily into the plot of a certain novel by Vernor Vinge :-)

    170:

    I'll admit my ignorance: which Vernor Vinge novel?

    As to why he knows about vernal pools, the campus he used to teach at (SDSU) was built over a wide swath of vernal pools, and the biology department there studies them. I live near vernal pools, and they are certainly inspirational for alien worlds and plots.

    171:

    "Of course, one might also argue that our computer systems are also on the order of trillions of times less powerful/efficient than the human brain"

    It is generally believed that this is not so. Estimates for human brain processing capacity vary from around 1 to 1000 petaflops. http://www.transhumanist.com/volume1/moravec.htm

    172:
    One thing where we are nowhere near the limits is efficient computing. We are factors of trillions away from the limits, or more. One might also presume that is how far we are away from the "ultimate intelligence"

    I don't know what you mean by "ultimate intelligence". Assuming we eventually reach the limits for your value of "ultimate intelligence", how would this change the way the night sky looks?[1]

    Oh, btw, I found the answer to your question about the two light sails; it turns out that the velocity in m/sec is given by v=10^(8/3)*Sqrt[10^(2/3)-1/(x+0.01)^(1/3)], where x is the distance in meters from the initial position. Turns out your sails reach 80% of their final velocity in less than a quarter of a meter, not quite 90% over the first meter, and a tiny bit over 91% after two meters. I'll let you do the differentiation :-)
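
    For anyone who wants to sanity-check those percentages, here's a quick numerical sketch (mine, not the commenter's) of that expression, taking v in m/s and x in metres as stated above:

        import math

        def v(x):
            # velocity (m/s) as a function of distance x (m), per the expression quoted above
            return 10**(8/3) * math.sqrt(10**(2/3) - 1/(x + 0.01)**(1/3))

        v_final = 10**(8/3) * 10**(1/3)   # limiting velocity as x -> infinity: 1000 m/s
        for x in (0.25, 1.0, 2.0):
            print(f"x = {x:4} m: v = {v(x):6.1f} m/s ({100*v(x)/v_final:.0f}% of final)")
        # prints roughly 81%, 89%, and 91%, in line with the figures quoted above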

    [1]YASFT (Yet Another Sf Trope) - a lot of stories have been written over the years where the cosmological observations the theorists have been tying themselves into knots to explain turn out to be artificial in origin. Those anomalous lithium-7 counts? Relics of high civilization from the Quark Era.

    173:
    I'll admit my ignorance: which Vernor Vinge novel?

    'Deepness'.

    174:

    Late to the party, but here's why to read the link our host posted:

    "we take wires completely for granted. This is most unwise. People who use the Internet (or for that matter, who make long-distance phone calls) but who don't know about wires are just like the millions of complacent motorists who pump gasoline into their cars without ever considering where it came from or how it found its way to the corner gas station. That works only until the political situation in the Middle East gets all screwed up, or an oil tanker runs aground on a wildlife refuge. In the same way, it behooves wired people to know a few things about wires - how they work, where they lie, who owns them, and what sorts of business deals and political machinations bring them into being. "

    175:

    Oh and if someone has a good link to something more recent about the current state of the physical Net, I would be grateful.

    176:

    I loved the Mao Bell piece. The ending is a fascinating little time capsule showing what things in China looked like in the early 1990s, even to someone as bright and knowledgeable as Neal Stephenson.
    At least as of 2011, his forecast is well wide of the mark.

    177:

    "I don't know what you mean by "ultimate intelligence". Assuming we eventually reach the limits for your value of "ultimate intelligence", how would this change the way the night sky looks?"

    Ultimate intelligence? In one case, the limit of processing power you can cram into a human-head-sized volume with around 20W of power supply, probably around 10,000 times what we have now. At the other end of "ultimate" we might have Matrioshka Brains http://en.wikipedia.org/wiki/Matrioshka_brain

    178:
    Estimates for human brain processing capacity vary from around 1 to 1000 petaflops.

    Those estimates are rather... speculative, to say the least. With 10¹¹ neurons and 10¹⁴ synapses, each neuron being somewhat akin to a small DSP in its own right... Let's just say that actually emulating a human mind in computer form would likely take a lot more power than that, but perhaps a more efficient scheme is possible and that number is plausible.

    Even so, the human brain consumes about 20W, so we get on the order of 5×10¹³ to 5×10¹⁶ operations per watt for the brain.

    Let's compare to a modern supercomputer: http://www.green500.org/lists/2011/11/top/list.php

    The most efficient available appears to be the IBM BlueGene/Q, which gets approximately 2,000 MFLOPS/W, or 2×10⁹ operations per watt.

    So using those numbers, the brain is somewhere in the thousands to millions of times more efficient.
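
    For what it's worth, here's a little arithmetic sketch of that comparison, using only the figures quoted above (20W for the brain, 1 to 1000 petaflops, ~2,000 MFLOPS/W for BlueGene/Q):

        # ops-per-watt comparison, brain vs. BlueGene/Q, from the figures above
        brain_power_w = 20.0
        brain_flops_low, brain_flops_high = 1e15, 1e18    # 1 to 1000 petaflops
        bluegene_ops_per_w = 2e9                          # ~2,000 MFLOPS/W

        brain_low  = brain_flops_low  / brain_power_w     # 5e13 ops/W
        brain_high = brain_flops_high / brain_power_w     # 5e16 ops/W
        print(f"brain advantage: {brain_low/bluegene_ops_per_w:.1e} "
              f"to {brain_high/bluegene_ops_per_w:.1e} times more efficient")
        # roughly 2.5e4 to 2.5e7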

    179:
    "I don't know what you mean by "ultimate intelligence". Assuming we eventually reach the limits for your value of "ultimate intelligence", how would this change the way the night sky looks?"
    At the other end of "ultimate" we might have Matrioshka Brains

    But we don't, in fact, see such things in the night sky whereas theory says we should, and with a rather cursory inspection at that. So what's wrong with this picture? Traditionally this absence is seen as evidence that life - at least, intelligent, tool-using life - is quite rare and that one or more terms in the Drake equation have to be drastically revised downward. Hence the proliferation of Great Filters: simple life is common but multicellular life is not. Higher life forms are common but intelligence is not. Intelligent life with advanced technology arises frequently but the very traits necessary for that developmental path also guarantee its destruction a mere few centuries after technical mastery is attained.

    And so on and so forth.

    My point is that the other way you could look at the lack of any easily detectable alien presence is to suppose that the data actually has nothing to say about the prevalence of intelligent life in the universe. Rather, it should be taken instead as evidence that the technological ceiling for any race, no matter how clever or advanced, is actually a lot lower than we commonly (and to be blunt, wishfully) suppose. At least when it comes to the sort of technology that enables interstellar exploration and Matrioshka brains :-) Greg Egan had a bit on this in Diaspora where the supposedly primitive inhabitants of a five-dimensional star turn out to be much more advanced than the polity that first contacted them.

    180:

    "So using those numbers, the brain is somewhere in the thousands to millions of times more efficient."

    True, but the brain is still a long way from maximum theoretical efficiency, even without using reversible logic.

    181:

    Which is why I think we are living in a simulation within such a Brain. Or probably several levels of simulation down from that. It resolves all the questions.

    183:

    I don't think neurons are the equivalent of a DSP; their logic is much simpler than that. It's just that their implementation is not at all like the way we build digital logic, so to simulate the operation of a neuron we have to use a DSP. I'd say each synapse is the equivalent of at most 2 bits of logic, which puts the total number of operations in the brain at the low end of the range you cite, at most.

    184:

    Er, no. Firstly, synapses are analog devices. They look digital, but there's some ambiguity over whether they actually are digital, or analog with a threshold above which they switch behavioural modes. Or something else. Secondly, you may have more than one type of neurotransmitter receptor per neurone. Thirdly, ambient hormonal effects in the inter-cellular medium that perfuses the brain. Fourthly, there are direct microtubule connections between cells that we're only just beginning to understand (in the past five years or so) and which may be big enough to permit the transfer of neurotransmitters.

    Basically, we don't know enough about neuronal microanatomy, except on the crudest scale. Yes, we can dissect some big neurons and play with them and they behave fairly predictably. But we don't know a lot about how ensembles of them operate in the rather different tissue of the human brain.

    (And I didn't once have to invoke Penrose's quantum loopiness!)

    185:
    Which is why I think we are living in a simulation within such a Brain. Or probably several levels of simulation down from that. It resolves all the questions.

    Yes it does. So why am I not satisfied with your explanation? Otoh, all we need to do to falsify my hypothesis is to go out there and see what's happening. And while I have my suspicions about just how difficult space travel really is, the one thing we can all agree on, optimists and pessimists alike, is that we all really want to go :-)

    This all goes back to one of Charlie's early specials, btw, The High Frontier Redux (and incidentally, what prompted me to start following this blog.) Bearing in mind how often you hear some riff on "All you need are shrimp and algae!" from the pro-development folks despite repeated exposure to the facts, maybe the better question for them would be:

    "If space travel is so easy, how come we don't see any aliens in our back yard?"

    You know, all those guys who've had millions of years to get it right ;-)

    186:

    OTOH, there's a lot to be said for working out what it would take to duplicate a well known neural function eg the retina, and extrapolating from that. It is also very likely that most of what the brain does is there to keep it alive, with the kind of processing which interests us being a secondary function. For example, is the pseudo analog nature of the synapse necessary or simply there because Nature could not manage full digital?

    187:

    "So why am I not satisfied with your explanation?"

    My explanation is just as testable as pretty much anyone else's when it comes to the Fermi Paradox. If we build gigantic artilects, then starship and populate the galaxy Von Neumann style then I would say the Simulation Argument is falsified. However, long before that point I would expect this level of simulation to "collapse" as it ran out of computing resources far sooner than one might expect. Another version of the Singularity.

    188:

    That's how Hans Moravec went about getting his estimate of the computational complexity of the brain.

    Trouble is, the retina is a rather specialized structure with a rather specialized role, and it's just not that complex compared to other neural structures we can see. So this falls under the category of "looking for the missing keys under the street light" solutions ...

    189:

    Maybe, but there is also promising work with brain prostheses eg artificial cortex and hippocampus.

    190:
    My explanation is just as testable as pretty much anyone else's when it comes to the Fermi Paradox. If we build gigantic artilects, then starship and populate the galaxy Von Neumann style then I would say the Simulation Argument is falsified.

    Well, that's the plan. Otoh:

    However, long before that point I would expect this level of simulation to "collapse" as it ran out of computing resources far sooner than one might expect. Another version of the Singularity.

    Big "Huh?" What leads you to believe that this hypothetical simulation would run out of resources? Particularly in light of the fact that the best argument for strong AI (that I know of at least) is that in the worst case scenario you could simply program a computer to keep track of a simulated human being at the atomic or even subatomic level?

    I'll grant you that in the end this is every bit as speculative as the argument that one doesn't need to model individual synapses or even individual neurons to create a passable human-equivalent consciousness. But somehow, when your simulation is fine-grained enough to be keeping detailed track of individual electrons, the reasoning seems a bit more convincing. Sue me :-)

    So given that your hypothetical simulation is running much deeper than that (all the way down to the Planck scale?), in what sort of situations, precisely, would you expect it to break down? And why?

    Although . . . has anybody written a story where it turns out that yes, we are being run in a simulation and that because of resource constraints, whoever's responsible is doing so using the smallest possible model consistent with human level consciousness/intelligence?

    That classical argument for strong AI doesn't look so relevant any more. Otoh, now Penrose gets to claim he was right all along :-)

    191:

    "What leads you to believe that this hypothetical simulation would run out of resources?"

    Because we are probably not talking about one simulation but a vast number. And the overwhelming majority would be "cheap" sims, with just enough resources allocated to simulate a certain number of people (probably not 7 billion) at the neural level, and whatever scenery is required to fool them. Try turning the moon into computronium in one of those and the train hits the buffers almost instantly.

    As for why there might be a vast number of low level sims... if I make it through to the point where suitable computing power is available I will bring back my dead relatives and friends in a level of detail that is indistinguishable from the "originals". And I expect them to do the same for me.

    192:
    Because we are probably not talking about one simulation but a vast number. And the overwhelming majority would be "cheap" sims, with just enough resources allocated to simulate a certain number of people (probably not 7 billion) at the neural level, and whatever scenery is required to fool them.

    Again, why? Atoms are atoms, whether they're assembled into a human, an ostrich, a shoal of fish, a tree, or even just a bunch of dumb rocks.[1] No matter how complicated the emergent behaviour. So why do you assume that your hypothetical simulation is occurring well above the atomic level? You seem to be positing not only a generic simulation, but also certain ground assumptions as well.

    [1]IIRC, the basic rules of the Autoverse in Egan's Permutation City didn't give you the emergent behaviour of anything so complicated as a single atom until you went up several levels of complexity encompassing millions of fundamental cells.

    193:
    But suppose you're transmitting some sort of intelligent agent that can operate autonomously once it's been downloaded into the proper substrate.

    Oh my god! You want to send them a virus! We're going to die!

    194:

    "Again, why?"

    I already told you why. In the near term it will be a method to resurrect the dead. If an AI can reconstruct me to the level of detail that has me writing these words, here, at precisely this moment I would say I'm back.

    195:

    No, I'm asking you why you think this simulation is going to crash? Your rather brief answer seemed to have a number of other fundamental assumptions embedded in your scenario. Read my question again for a few of them.

    196:

    The simulation would crash because most would only be set up to do a high level (neural) sim on a limited number of people plus environment. If we started building Jupiter Brains we would find out that there is not enough processing power left in the sim to accomplish that. It would become very obvious that the "Laws of Nature" had ceased to work. The only way of maintaining the sim at that point would be for whoever is running it to feed in Jupiter Brain quantities of processing power - which I think they would be unlikely to do.

    197:

    Let me rephrase: why do you think this simulation is so small and coarse-grained, and why do you think whatever high-level modelling it is performing covers such a limited group?

    Doesn't this strike you as yet more assumptions piled on top of your original one?

    198:

    And Moravec did assume that treating neurons as digital would be sufficient to accurately model and reconstruct a particular brain, i.e., the concept of mind upload implicitly depends on that. At this point I don't think we know whether the neural actions that are important for the operation of the brain are analog, digital, or a hybrid of the two.

    You're right that we don't know very much about how neurons work in general; in fact we are still discovering new types of neurons, and finding out things we didn't know about the types we have studied. But however they work, I think that trying to understand how the brain functions by looking at individual neurons, or even at small networks of them, is futile.

    The brain is composed of very many interconnected circuits of neurons, many of them connected recursively, and cognitive functions seem to be reactive in the sense that they are modulations of waves of activity that flow from the afferent nerves (including all the proprioceptive senses) to the Central Nervous System, through the brain where they are affected by memory and cognition and affect them in turn, and then back out the efferent nerves and so to the muscles and other effectors. The changes in the outside world and the boundary between the body and the world are then fed back in through the senses. We need a systems theory of the brain and its subsystems before we can understand how it works.

    199:

    The simulation would crash because most would only be set up to do a high level (neural) sim on a limited number of people plus environment. If we started building Jupiter Brains we would find out that there is not enough processing power left in the sim to accomplish that.

    I think you have not read up on the simulation hypothesis as much as you might have. Processing power, per se, cannot be the limiting factor in a pure simulation; as more computing is required, the simulation will simply run more slowly, in terms of how internal time compares to external time. This will not be evident within the simulation, of course; modeling one second's worth of action may take a millisecond or a week, and within the simulation there's not necessarily any way to tell.

    This assumes the inhabitants - us? - are in fact simulated entities themselves, rather than brains in vats, video game players, or what have you; obviously, observers outside the computer can see the difference.

    The limits on simulated reality, aside from engineering and economic issues, are several. The simulation must be smaller than the outside system (since reality-one must exist within reality-zero's memory constraints), probably less complex (simplifying hard-to-compute things like weather), and probably less detailed (for unobserved or featureless things like deep ocean waters).

    200:

    "why do you think this simulation is so small and coarse-grained, and why do you think what high level of modelling it is performing is on a group of such a limited size?"

    Most simulations will be coarse-grained simply because they will be far cheaper to run than a fine-grained sim. Why run a fine-grained sim if the results you want can be obtained from coarse grain?

    The other question concerns why such sims would be run. The obvious (to me) near-term answer is reconstructing the dead. I'm not talking about whole-world simulations run in the year one million AD, but a vast number run within the next century or two. Designed to bring back people who are still largely part of living memory. How many people would like to bring back their parents and grandparents? That has to be in the millions if not billions.

    It is those numbers that make this world much less likely to be "real".

    201:

    If we have to go analog then the complexity goes up by two or three orders of magnitude. Which means that instead of matching brain computational capacity now, we will have to wait for exascale computers in another 6 years. Not a deal breaker.

    202:
    "why do you think this simulation is so small and coarse-grained, and why do you think what high level of modelling it is performing is on a group of such a limited size?"
    Why run a fine grained sim if the results you want can be obtained from coarse grain?

    BINGO! How do you know what results these outside operators are looking for? And how do you know what can or cannot be done with the virtual machine they're using? For someone who thinks there's a nontrivial probability that they're just code running on what may well be "a little device sitting on someone's table", you certainly seem to have gotten ideas above your station :-)

    There are a lot of different directions you can take the fundamental concept, of course. One I've already mentioned is that you're entirely correct regarding their frugality, and the world we're in - incorporating QM and operating at the subatomic level with electrons, quarks, bosons, etc. - is the smallest, coarsest, cheapest world consistent with strong AI agents like ourselves.

    Another possibility is the Zones of Thought scenario; perhaps the intrinsic properties of the exterior realm are such that what our minders call "consciousness" (not necessarily intelligence) is a much finer stuff than our own. Maybe they're just trying to find out how coarse the laws of this toy universe can be made before consciousness becomes an impossibility no matter how powerful the computational resources you're trying to brute-force into sentience.

    And so on and so forth - you can speculate endlessly about what these hypotheticals "want", but you're doing so in a factual vacuum. Hey, maybe this is all for some other, completely unrelated species' benefit and you're only an NPC. Spare me your mechanical insistence that the lights are really on inside; I'm sure that from your own perspective that little dab of awareness you possess seems like the genuine article. But the fact of the matter is you've been given just enough autonomy to make a convincing Orc for the real ancestors.

    Can you tell me with a straight face that your speculations regarding the motivations of the sim runners and the resources available to them are more plausible or legitimate than my own throwaways? That's why this sort of thing never really goes anywhere.

    203:

    "Can you tell me with a straight face that your speculations regarding the motivations of the sim runners and the resources available to them are more plausible or legitimate than my own throwaways? "

    I think the motivation I outlined - resurrecting the dead - is definitely the most plausible simply because I, and probably millions of others would do it if possible, as soon as possible. I'm talking about people alive now running those sims. Our children, for example.

    204:
    I think the motivation I outlined - resurrecting the dead - is definitely the most plausible simply because I, and probably millions of others would do it if possible, as soon as possible. I'm talking about people alive now running those sims. Our children, for example.

    Isn't this just an argument from incredulity? "I can't think of a better reason for running sims than bringing our loved ones back"? The fact that you can't imagine those reasons doesn't imply their nonexistence now, does it?

    Perhaps if you were able to get your sim people up and running in the next decade or the next century you might have a point. But if it turns out that sort of modelling is really, really tough and you really do need to go all the way down to the subatomic level to passably simulate a human being, aren't we talking about something rather farther into the future?

    Note that I'm not saying the scenario you're imagining is incorrect. I'm just saying I don't see how assuming we're living in a simulated construct gives us the deductive leverage to make any further inferences about what that outside world is like. For all we know it's a 23-dimensional universe out there and we're just an old kid's book that's still in the library three houses later. Or maybe it's just two dimensions; the point is, there's no way to know or find out on our own anything about this larger world, even if we grant its existence and our status in it.

    205:

    This whole argument reminds me of Pascal's Wager, which is still being debated today, despite the fact that it's almost totally incoherent. The Wager assumes that you know enough about the nature of God that the question of God's existence reduces to 1) either God has the nature you ascribe, or 2) God doesn't exist. Not at all a good bet; suppose the actual nature is that Gog and Magog are the real deities, or for that matter, his Noodly Majesty? Or that Satan is actually the Big Boss?

    Trying to guess the nature and motivations of beings who have created us as simulations strikes me as a similar exercise to figuring out the nature of God, and just as hard to test.

    206:

    "But if it turns out that sort of modelling is really, really tough and you really do need to go all the way down to the subatomic level to passably simulate a human being, aren't we talking about something rather farther into the future?"

    Yes, but there are no indications we need to go sub-molecular. It may even be possible to eliminate most of the complexity of neurons. That's why there are several whole-brain emulation projects looking for definitive results within the next decade. It would seem the raw computing power is almost there.

    And if there are other motivations beyond the ones I have outlined, that just makes the number of simulations even larger, and the probability that this is the real world even smaller.

    Anyway, I believe that there are multiple nested simulations, not just "one above". If so, then resurrecting the dead becomes even easier, because we may have access to the next level's records.

    207:

    Just wanted to thank everyone for some of the most thought provoking discussion I have seen, just about anywhere. Amazing insight and thoughtful discourse. Cheers and thank you.

    208:
    "I think the motivation I outlined - resurrecting the dead - is definitely the most plausible simply because I, and probably millions of others would do it if possible, as soon as possible."

    How, may I ask, will running simulations of things, even smart and interesting things, result in the outcome of resurrecting the dead?

    If you have a copy of the fundamental mental state of the dead person, then you don't need the simulation to resurrect them. You in essence have a copy of them on a floppy disk. You just need a computer to wake them up so you can talk to them.

    If you don't have a copy of them, then no amount of random searching is going to bring them back. This should be kind of clear from the size of the search space: if a person could be represented by a mere 1 million bits of description (hardly a reasonable depiction of a person), then you'd have 2^1,000,000 people to search... And you have no way of knowing which one is the right one, either.

    Of course, 2^(10¹⁶) would seem rather more plausible... Wolfram Alpha is nice enough to tell me that that number is 3×10¹⁵ digits long.
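
    For anyone wanting to check that digit count, a one-liner sketch (the 10¹⁶-bit figure is the one assumed in the comment above):

        import math

        n = 10**16                                  # bits of description assumed above
        digits = math.floor(n * math.log10(2)) + 1  # decimal digits in 2**n
        print(digits)                               # about 3.01e15, as quoted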

    209:

    Hush! Don't confuse a religious issue with logic.

    You have to realise that only the correct one will have that intangible essence of being the right one. For now, we can dub that essence the 'soul'.

    Actually, perhaps 'they' don't actually mind. Perhaps they're just exploring the phase space bit by bit, no matter how vast it may be, and it doesn't really matter whether we are 'the dead' (people whose patterns have existed before), or not.

    210:

    If there are nested simulations, then the state of our "dead" is already likely recorded somewhere.

    OTOH, if someone in the future had my DNA, medical records, photos, history etc plus everything I have written then a reconstruction should be possible. If the reconstruction is writing these words, at this exact time, in this blog, I would say the reconstruction is almost totally accurate. Certainly as accurate as "I" am from one day to the next, after sleep.

    Since I actually believe I am a reconstruction the accuracy does not bother me too much, since I am who I am regardless.

    211:

    In the multiverse we all explore all possible phase space and histories.

    212:

    [Coming in very, very late to a very interesting post and discussion...]

    We're in a region of million-degree, ionized, low-density hydrogen gas.

    True, but Charlie's proposal was set up as a generic possibility for advanced civilizations, so it's reasonable to use the mean density of the interstellar medium to estimate how often your message package runs into hydrogen atoms or ions. If you want to focus on a much more specific scenario involving the Solar System and its immediate neighbors (e.g., communicating with Alpha Cen), then the Local Bubble/Chimney would be relevant.
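
    For a rough sense of scale (my own back-of-the-envelope; the ~1 hydrogen atom per cm³ mean density is the usual textbook round number, not a figure quoted above):

        # hydrogen atoms swept up per cm^2 of frontal area over a 10 light-year trip,
        # assuming a mean interstellar density of ~1 atom/cm^3 (an assumed round number)
        ly_cm = 9.461e17               # one light year in centimetres
        distance_cm = 10 * ly_cm
        density_per_cm3 = 1.0
        print(f"{distance_cm * density_per_cm3:.1e} atoms per cm^2")   # ~9.5e18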

    213:

    Regarding that link, so the Local Fluff is expected to get to us in 50,000 years, when the stars are right?

    I thought the Local Fluff had already reached us.

    214:

    What worries me about the simulation argument is the implicit precedence given to human existence.

    What if the objects of the simulation are bacteria (we're just a boring high-level emergent phenomenon of symbiotic prokaryotes ganging up together inside single cell membranes and swapping messages)? Or mammalia in general (because warm-blooded life is important and, oh, as carnivores we're in the shit)? Or some variety of posthuman which finally meets the criteria for being interesting to the ancestor simulators?

    Why should we be the special sparkly ones?

    (You may take "Missile Gap" to be a stab in this direction.)

    215:

    It depends who is running the simulations. First, we have to get out of the habit of thinking about "the" simulation. There are likely to be zillions of them, run by just about anyone or anything you can imagine from the year 2030 until the galaxy runs out of energy in 100 billion years' time. The ability to run whole brain emulations in realistic scenery will probably be with us in less than 20 years (optimistically) and certainly less than 100 (pessimistically). Barring those stay-at-home aliens, it is us doing the sims. And who is our favorite subject?

    In the case where it's bacteria being simulated, I imagine we will all be deleted when the program ends. In which case we will never know. OTOH, we can only experience the ending of sims where we get the upgrade option. A kind of simulation-argument version of quantum suicide (immortality). And, speaking of which, in the multiverse there must be a migration from "real" reality to simulations, from a subjective POV.

    216:

    Jim Benford (brother of Greg) has covered a lot of this ground professionally, and presented the groundling's version at a WorldCon a few years back. From memory, the high points were a) use microwaves, not visible light frequencies, and b) accelerate ~real~ fast so the work gets done at short range and minimises the inverse-square losses.

    He's still working on the concept. Here's a recent link

    http://nextbigfuture.com/2011/12/james-benford-works-out-details-around.html

    217:

    Yep, he did an interesting presentation -- with video of some actual lab tests! -- at the 100 Year Starship conference.
