
This week's sense of wonder hit

Wolfram Alpha is launching soon. Developed by Wolfram Research, Alpha is a knowledge engine with a natural language front end: it's designed to answer questions about domains of structured knowledge, much as search engines come up with answers about unstructured piles of data.

Paging D. G. Compton: the man with a camcorder in his eye (I wonder what the police will do about this? Especially if we add real-time broadband uploading? And it catches on?)

The 3D bone printer deposits tricalcium phosphate and polylactic acid in layers to build up a bone-like matrix, then bone marrow stem cells are used to grow osteoblasts, which are washed into the scaffold. On being cultured under the skin of a living animal (in trials they use mice), it matures into living human bone tissue. If they can get blood vessels to grow into the material, and layer muscle and other tissue over it, there may be hope for amputees.

(I'm trying to figure out a way to combine these news stories. How about you?)

97 Comments

1:

The bone printer is wildly cool. And just think of the body modification possibilities of being able to grow human bone in any shape desired.

I hadn't seen the Wolfram Alpha news yet. Hm. It will be interesting to see just how well it works. I just don't see how it can compute the answers to factual questions. Those are look-up questions. Maybe it figures out the meaning of the natural-language question then looks up the correct fact? I'm curious how far it can extrapolate, and how well it can put together disparate facts.

And I must thank you: I was curled up reading Glasshouse this morning, thoroughly enjoying a lazy rainy couple of hours. That alone would be sufficient, but a throwaway line near the beginning inspired an entire novel plot! Not bad for about five words.

2:

Easy: A spy agency outfits an agent with a camcorder eye and infiltrates her into a cabal of foreign businesspeople. When she meets some industrialist type, the agent's handler consults Alpha: "Who is Yevgeny Kharkov and how much is he worth?" For high-net-worth targets, the handler maps the target's face and sends the file to the 3D bone printer, which creates a simulacrum on which the agency grows a reasonable replica of the target's face. They surgically implant the face on another agent, who then takes over the target's business interests. With a few such deep agents, the agency starts hollowing out national economies. Etc.

3:

How about...

Nasty accident with limb missing. Paramedics turn up and assess situation with a camera device (not necessarily implant) which uses visual recog and sensors coupled with that wolfram semantic doo-dah to determine victim's requirements in real-time (this step might be a bit flakey). Transmits that back to hospital bone printer to grow a nice fresh limb that's waiting for the victim when they get there. Grafted straight onto patient using future-voodoo-tech. Job done.

4:

Using tricalcium phosphate as a superstructure to grow osteoblasts on has been around for years, but using a 3D printer to make them to suit is cool.

5:

The bone thing... it seems silly to stop at just using stem cells to complete the process. Could a creature made entirely of these generated bones be assembled?

6:

My parents' company sells a product called ACell, which uses material recovered from the substrate of pig bladders turned into a powder or impregnated on sheets. ACell recruits stem cells from throughout the body to the site of a wound, where the stem cells grow into the exact cells that were there before.

It can regrow cartilage in joints, corneas - I even saw a photograph of a cat whose foreleg was "degloved" in a car accident, skin and fur stripped right off the muscle, and after being covered with sheets of ACell it grew the skin and fur right back. The only scar happened because the vet fucked up the intersection of the ACell sheets and the edge of the remaining skin a little, leaving a small scar at the "border" between the old skin and the new.

7:

I suppose they'll enter the camera-eyed chap's details onto a "crimint" database and then observe him closely and obviously whenever he attends anything public.

Can one sue for semiotic libel - if the crimint database has genint records in it then many people observing that a record exists will assume this to be an assertion of criminality?

I suspect the Palm/Graffiti lesson will be replayed with any "natural language" device: it's easier to teach people to speak "properly" in a restricted and constrained subset of %language than to teach computers to understand scrawls/the usual way people talk, like.

It'll be interesting.

8:

The bone printer sounds awesome, but I wonder if you could create something similar for teeth? Since people lose teeth more often than bone, because bone will repair itself in a way teeth won't, there's massive potential for creating and implanting replacement teeth.

9:

The bone printer sounds awesome, but I wonder if you could create something similar for teeth?

http://www.ted.com/index.php/talks/juan_enriquez_shares_mindboggling_new_science.html

About 10 minutes in.

10:

How about elegant solutions?

Recent discovery of Crohn's Disease cure and new flu jab at singularityhub.com, plus the above, seems to say to me, well done researchers. Don't call us, we'll call you Mr. Vatican apologist.

Science, love it!

11:

"(I wonder what the police will do about this? Especially if we add real-time broadband uploading? And it catches on?)"

And don't forget to add to the mix the real-time signing of every frame by a third-party authority to prevent forgeries (better make it several different TPAs).
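
A minimal sketch of what per-frame countersigning might look like (the names and the shared-secret HMAC scheme here are hypothetical; a real TPA would presumably use public-key timestamping):

```python
import hashlib
import hmac
import time

# Hypothetical sketch: the recorder hashes each frame, and a third-party
# authority (TPA) countersigns (digest, timestamp) with a key only it holds,
# so neither the frame nor its time can be quietly altered afterwards.

TPA_SECRET = b"held-only-by-the-timestamping-authority"

def frame_digest(frame_bytes: bytes) -> bytes:
    """Device side: commit to the frame content."""
    return hashlib.sha256(frame_bytes).digest()

def tpa_countersign(digest: bytes, timestamp: float) -> bytes:
    """Authority side: bind the digest to a timestamp."""
    msg = digest + repr(timestamp).encode()
    return hmac.new(TPA_SECRET, msg, hashlib.sha256).digest()

def verify(frame_bytes: bytes, timestamp: float, signature: bytes) -> bool:
    expected = tpa_countersign(frame_digest(frame_bytes), timestamp)
    return hmac.compare_digest(expected, signature)

frame = b"...raw frame data..."
t = time.time()
sig = tpa_countersign(frame_digest(frame), t)
assert verify(frame, t, sig)              # intact frame checks out
assert not verify(frame + b"x", t, sig)   # any tampering breaks the signature
```

With several independent TPAs, as suggested, a forger would have to compromise all of them at once.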

12:

Christopher @6: How can I put this... do I believe you?

13:

"3D bone printer"

The license fee will be prohibitively expensive. (For the bones, not for the printer. They'll probably throw in the printer for almost free.)

"(I'm trying to figure out a way to combine these news stories. How about you?)"

No, I'm not.

Oh, all right then. A knowledge engine, a man with a camcorder in his eye and a 3D bone printer walk into a bar...

The only thing I can think of is a modern version of The Cask of Amontillado, where a search engine millionaire is invited to the towers of Wolfram Research (didn't they use to employ vampires and such?). The paranoid Wolframs accuse their guest of industrial espionage, and decide to punish him with a 3D bone printer [NEEDS WORK]. Little did they know that their guest was indeed there to lift their ideas, and the camera in his eye slowly records the gruesome murder...

14:

Guy who had his mandible removed due to bone cancer had it regrown (they had deliberately left the actual jaw joints in place) in an experimental program in Toronto more than five years ago now; the framework was a one-off, not a printer mechanism, implanted under the skin of his shoulder blade, but it worked.

Couldn't give him back his teeth, but certainly an interesting proof of concept.

15:

The Wolfram Alpha announcement is certainly very interesting. I must put myself in the skeptic camp, but I look forward to hearing more details.

It's not just bone printers that are being developed. We've seen scaffolds for cartilage, kidneys and even hearts being developed that are infused with stem cells that create functioning organs. What is interesting to me is that biology has, to date, assumed that development will dictate final form. 3D printers may change that assumption. Computer modeling of organs might suggest more efficient structures that can be built and implanted. Runners with bird-like hearts?

As for eye-corders, is that fundamentally new? I see this as much a benefit as a problem. Remember the recent case of the BART officer ultimately caught on cell-phone videos committing murder? Police on the scene tried to confiscate cellphones. If everyone has tiny recorders recording their lives on super high density storage, then how can the police prevent observers of their behavior recording their actions? This is one area where citizens can fight back against police state abuse. Or maybe the police will carry RF jammers to stymie this?

16:

Alex, there's nothing special about bird hearts, apart from their size. Birdlike lungs would be more to the point, but problematic in a mammal (birds don't have a diaphragm). You also have to remember that tissues are quite dynamic. Superfluous bits of bone will gradually disappear in a healthy body, for instance. Even if the body might need them later (ask an astronaut).

17:

I'll believe Wolfram Alpha can do what is being claimed for it when I see it. I think a system which really could answer a full range of natural language questions would have to be a true AI.

18:

Wolfram is interesting, but how do we fact-check it? With Google you know the source of the information and can judge its reliability from there.

What if people rely on it and it's wrong? Like HAL.

I'd say actual indisputable facts are few and far between. Can you ask it what pi is? Also, who is Number One?

19:

"Some might say that Mathematica and A New Kind of Science are ambitious projects. But in recent years I’ve been hard at work on a still more ambitious project—called Wolfram|Alpha."

Holy cow!

As to "What about all other systematic knowledge" I've had the joy of discussing this f2f with Dr. Wolfram (at an International Conference on Complex System where he was a plenary speaker).

First, what is the topology of the ideocosm (Zwicky's term for the space of all possible ideas)?

Second, the evolution of systematic search from Lully to Leibniz to Lady Lovelace to Wolfram... what's next?

Third, Dr. Wolfram's question to me: if the systematic automated exploration turns up something immensely valuable, who should own the intellectual property rights?

20:

As an ex photographer I see that the new law about photographing policemen is capable of being excessively enforced.

Police (and other officials) already attempt to stop photographers going about their business in public. Having a new law to back them up, particularly one that frames the issue as 'terrorism', seems a definite bad step, yet another in what seems a long list...

Lastly, I find the language used by Stephen Wolfram smells of bullshit. The project might be great but it's couched in terms that I'd run a mile from if I heard them from someone trying to sell me something.

21:

Wolfram has IIRC a fairly impressive track-record of delivering... So we should watch with interest.

The embedded camera/recorder can, of course, ALSO be used by the police - as Charlie noted some time back. However, the ongoing attempts to stop public photography (not JUST of the cops) are getting some serious flak here - nowhere near enough, yet, but, sooner or later, there is going to be a high-profile case, which the fascist cops/CPS are going to LOSE. Then, of course, they'll try for a new, even more restrictive law, which will be struck down by "Europe" - specifically the European Court of Human Rights. Interesting. As for re-growth of tissues, THIS was released today - interesting.

22:

Wolfram delivered a reasonable symbolic algebra system, but he also delivered a pile of hype with regard to NKS. I'm in the sceptic camp too.

23:

In the context of Wolfram Alpha, I'm surprised no-one has mentioned Doug Lenat's Cyc project. Cyc is going to do everything that Alpha is claimed to do and more, and has been going to do it for about 25 years now. (Is my cynicism showing?)

Recently I spent a few years as part of a business angel partnership, looking at start ups' business plans to decide which were worth investing in. Several "knowledge engine with natural language front end" proposals came my way. We didn't invest in any. I'd love to see one that worked, but I'm not sure if I even expect to see HAL 1000 before I die, never mind the 9000 series.

24:

It is time to name my computer. I choose "Joe."

My father is gonna love the Wolfram Alpha story. He's already got the two-way television watch, even if it's a little big for his wrist. Now all he needs is his goddamned flying car.

25:

Looking back over Charlie's previous blog I remembered the nano AV camera for military use.

How long do you think it will be before I can buy (at a mass market price) a camera drone that follows me around and transmits everything it sees to my iPhone/life recorder? Then my phone can transmit all its data to my home.

Because you could make one right now (albeit very expensive for a prototype).

Anyone up for investment?

26:

Why is it that faith healing never works for amputees?

27:

On the eye-cam, I think it is high time technology catches up with the original Cyborg novel. Not the silly TV series, but the Martin Caidin story in which that's exactly what Steve Austin has (instead of a magical hyperfocus eye).

28:

Chris L@16. Bird heart was a bad example. The point I was making was that we don't have to assume that evolution has resulted in optimal anatomy, so that there is room for designer organs that perform better. The shape of organs is determined by development, a process we may be able to significantly modify or bypass using scaffolds.

29:

A 'nanocam'? How do you get around basic optics? The Wolfram hype - I'd like to believe it, but let's see how it actually works. I don't think his claims will be that far off if you parse them carefully . . . but that's sort of like the (2,3) automaton being 'proved' to be a universal Turing machine. Well, for some definitions it's been proven, I guess. Anyway, wait and see, who knows? This is, incidentally, the sort of AI I think we'll be getting in the next hundred years. Something that could pass the Turing test in sharply delineated circumstances, but nothing that anyone would call 'real' AI.

The most significant thing I see on the list is formal legislation allowing the police to confiscate independent evidence. That seems to be happening not just in the UK, but here as well (please, no derisive remarks about that last comment).

30:

ScentOfViolets: you're the first person here to use the term "nanocam" -- optics capable of resolving distant objects and fitting inside a human eye socket would indeed appear to be permitted by the laws of physics!

On confiscation of evidence -- I'll note that in France they recently passed a law criminalizing the filming of a crime. The ostensible motive was to ban "happy slapping", which is laudable, but it was drafted so widely that the folks who filmed the BART shooting in Oakland last year would be doing hard time under the French law (as opposed to the transit cop with the gun being under investigation for murder).

31:

"nanocam" != optics capable of resolving distant objects and fitting inside a human eye socket would indeed appear to be permitted by the laws of physics!

Eyeball sized optics are very macro. But real nanocam optics, say the size of the order of wavelengths of light would not work and dimensions of a few nm would most definitely not work using optical approaches.

A nanocam could easily fit into a human hair - so your head could trivially support a lot of devices.

At the end of Bob Shaw's "Other Days, Other Eyes" (the novel built around "Light of Other Days"), the government is seeding the US with trillions of micro glass beads to record events in the "slow glass". While that ominous future assumes only the government would have access to the beads, one where the citizens can harvest them too would help balance the power.

The French legislation mentioned seems ultimately futile. No newsreels of crimes to be allowed? And how might they stop posting of crime video on offshore servers anonymously? Make it a crime to view crime video?

32:

Even if Wolfram Alpha works, I'm not enthusiastic about the creation of a "mathematical oracle" that people just have to trust knows what it's doing.

It'd be much more useful to define "web-math" - i.e. the HTML equivalent for math - letting people easily create and publish data and equations in the open for others to copy/paste/evaluate/correct/extend/apply.

NOT just a spreadsheet - it should make all equations and relationships and data sets fully visible. (The greatest flaw in the spreadsheet paradigm, IMO - hiding the assumptions inherent in the calculation structure - it made spreadsheets much more of a read-mostly "trust me instead of taking the time to understand" medium.)

The basic web-math engine would support easy creation of meta-equations - equations operating on and transforming equations, building on built-in arithmetic, logical and symbolic primitive operations, as well as previous meta equations. So someone could define a set of algebraic transformations, then build on that for symbolic derivatives, and others could improve or correct those.

But the base web-math engine wouldn't even try to enforce "legal" transformations. Instead, provide a means of linking/authenticating/differencing copied equation/data sets so people can quickly check that a system of copied equations or calculations or data is exactly the same as that originally published by a particular source - or see the specific changes someone has overlaid. If someone thinks they've spotted a bug in the algebraic transformation meta-set, they could publish a fix as an overlay on the authenticated rule set, along with examples to demonstrate the bug and their proposed solution.
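A minimal sketch of that authenticate-and-overlay idea, assuming equation sets are published as plain text (the names and the toy equations are illustrative only, not a real web-math design):

```python
import difflib
import hashlib

# Toy version of "linking/authenticating/differencing": a published equation
# set is fingerprinted by its hash; any copy can be checked against the
# fingerprint, and an overlay is just the visible diff against that base.

published = "area(r) = pi * r^2\ncircumference(r) = 2 * pi * r"
fingerprint = hashlib.sha256(published.encode()).hexdigest()

def is_exact_copy(copy: str) -> bool:
    return hashlib.sha256(copy.encode()).hexdigest() == fingerprint

overlay = published.replace("2 * pi * r", "tau * r")

print(is_exact_copy(published))  # True: byte-identical to the original
print(is_exact_copy(overlay))    # False: someone has changed something

# Show exactly what the overlay changes, rather than hiding it.
for line in difflib.unified_diff(published.splitlines(),
                                 overlay.splitlines(), lineterm=""):
    print(line)
```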

Likely a non-profit "webmath foundation" would quickly arise that would maintain the set of common meta-equations everyone comes to trust, but let that sort of thing be an emergent property, rather than built into the base concept.

With web-math, implemented as a web browser extension I suppose, you wouldn't go to a trusted oracle to get your answers - you'd search for someone who claims to have solved similar problems, grab and if necessary modify or combine their work, play with it, and if you come up with something interesting (maybe a new type of rocket engine, described as an overlay to a published "rocket science math" equation set), then publish it for others to criticize or admire.

Avoid falling into the copyrighted content trap - publishing with web-math should inherently make the math content public domain. If someone doesn't want their data or equations copied, they can keep them hidden and ask people to trust them - they have no need for web-math, and people can decide whether to trust them.

If copying and claiming credit for others' work becomes a problem, something like the web-math foundation mentioned above could solve that by allowing people to record first publication of new equation sets. But I doubt it'll be a big issue except among professional mathematicians.

33:

Alex Tolley @ 15: If everyone has tiny recorders recording their lives on super high density storage, then how can the police prevent observers of their behavior recording their actions?

In a Stasi-type surveillance environment, with regular downloads from the appointed observers/agents to police servers, the data is valuable for both regular police purposes and political repression, as well as internal audit of police procedures. OTOH, if duplicate copies of some useful number of these datastreams are downloaded by the Underground for their planning purposes . . .

34:

Of course, only the police use Alpha search and the grafted-on eye cam. Who else would dare to do this? Look a secret policeman deep in the eye, and learn something new about ubiquitous surveillance.

On the other hand, every petty criminal nowadays has their legs regrown in a matter of days - bone printed, I should say. Or their finger or hand, after they are caught and cut.

35:

Charlie@30:

Perhaps I'm jumping the gun. I read this:

Looking back over Charlie's previous blog I remembered the nano AV camera for military use.

as referring to something extremely small. I'm not saying anything about devices whose size is on the order of a few wavelengths of visible light. The problem is that small cameras simply can't see as far as large ones. And this happens at the macroscopic level; that is, a camera the size of one facet of a fly's eye isn't going to be seeing anything much past a few centimeters, for basic physical reasons. Are these nano AVs fairly large, then?

36:

@10: "Recent discovery of Crohn's Disease cure..."

When did that happen? I can't seem to find a reference to a "cure" anywhere, rather it's still described as incurable: http://www.mayoclinic.com/health/crohns-disease/DS00104/DSECTION=treatments-and-drugs

37:

ScentOfViolets: in current usage, "micro-UAV" generally refers to a UAV weighing between 1 g and 1 kg. (Full-up military UAVs like the Predator drone are the same size as full-scale crewed aircraft.)

I would assume that, by extrapolation, a "nano-UAV" is going to be something midge-to-mosquito sized, i.e. under one gram but big enough to be visible to the naked eye. Anything in that range is in any case subject to some funky scaling problems (IIRC you're in a low Reynolds number regime: air is effectively much more viscous for tiny objects than large ones).
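
For a sense of scale, a back-of-envelope Reynolds number check (illustrative numbers, not from the source):

```python
# Re = v * L / nu; kinematic viscosity of air nu is roughly 1.5e-5 m^2/s.
# Low Re (viscous-dominated flow) for a mosquito-sized flyer, huge Re for
# an aircraft-sized UAV.
NU_AIR = 1.5e-5  # m^2/s

def reynolds(speed_m_s: float, length_m: float) -> float:
    return speed_m_s * length_m / NU_AIR

print(f"mosquito-sized nano-UAV: Re ~ {reynolds(1.0, 0.003):.0f}")  # ~200
print(f"aircraft-sized UAV:      Re ~ {reynolds(35.0, 8.0):.1e}")   # ~1.9e7
```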

38:

Me @36, Jacques Hughes@10,

I didn't see the URL in your post when I first skimmed. I searched a bit and still didn't find much recent info on the stem cell treatment. I see it was first done in 2001, but apparently isn't in widespread enough use to be considered a cure.

Interesting bit of info though.

39:

Greg @ 21:

"The embedded camera/recorder can, of course, ALSO be used by the police - as Charlie noted some time back."

This reminds me of the story Paul Krassner, then editor of The Realist magazine, started sometime in the 1960s: that some FBI agents had had themselves castrated so they could fit holsters in their groins. Of course, a cop wouldn't have to replace an eye with an eyecam; it could always be embedded in the forehead, like a third eye :-)

Alex @ 31: One nanocam != useful optics, but many nanocams spread across an area == very useful optics if they can talk to each other and form an interferometer. The biggest problem will then be light grasp. This was what Vernor Vinge suggested as a way to use nanosensors in general; nanomicrophones could work the same way to give highly accurate placement of sound sources.

40:

Bruce: I don't think optical interferometry is practical with very small devices -- AIUI you need more bandwidth between them to run the interferometer than you could cram into the wavelengths of light that they're capturing. That means, not radio-frequency signaling but UV signaling (or possibly multiplexed radio-frequency signaling). It's going to get very, very hairy indeed.

Acoustic microphones are another kettle of fish entirely.

41:

@35: Don't tell the insects that. Especially not the ones that can track a moving target from considerably more than a few centimetres away.

@Alex: fair point. Any takers for an improved spine?

42:

@6:

Pig bladder extract? Like this?

43:

Charlie@40 - So maybe have midge-UAVs aim mirrors to focus an image back onto a wasp-UAV that carried them in and sprayed them out like a mist? The midge-UAVs just need to be able to be tracked, or settle onto a stable surface near the target, and have enough intelligence to respond when polled (so they can be precisely located), and aim their mirror precisely as instructed, probably relative to a reference IR laser and a reflected and modulated signal off that IR laser from their wasp-carrier.

44:

I spent a lot of time with NKS. If I could have taken anything out of it, I might have said "invested" instead. Alpha reminds me of Charlie's reference to fusion power - was it in Accelerando? - "natural language" is always just about 5 years away.

A miniature camera in your eye without a bionic interface is no big deal - we could embed the same in some sort of sunglasses, and you get to keep your eye instead of starring in RoboCop XX. Give me a way to "project" a video stream directly into my brain and I will be awed.

The bone printer is above my comprehension. I have seen the results of 3D printers and hated them - don't put such contraptions in a living body, FFS.

Charlie, I need to buy your books for the Sony Reader or hack them somehow - I have banned the dead trees from my house. Really looking forward to 419. All the best!

45:

Bruce @41: "but many nanocams spread across an area == very useful optics if they can talk to each other and form an interferometer."

The MIT "microscope" that uses no optics but just software as a target is moved across the face of a CCD behind micro perforations is just such a nano camera that uses many light sensing elements in a non optical way. I think you are right that aggregation of sensor signals is the way to go.

At a recent SETI talk by Jill Tarter, I was interested in her chart of the cost of hardware vs computing power in radio telescope array design. The trend was clearly in the direction of much smaller dishes and this has a long, long way to go.

46:

Stand Alone Complex navigated the "police vs. camera-eye" plot very well: higher-ups used the "interceptors" to spy on the police force, implanting them in officers during a routine medical exam. Although initially intended as a surveillance technology for use with suspects, this particular plot uses them as a means of keeping detectives away from crucial information that might help them break a controversial case. (Sidebar: this kind of plot is one of the reasons I enjoy the series. Tech like the camera-eye could very easily turn into a uniforms vs. civvies story, but I like the more thoughtful consideration of what the tech would mean within the structure of power.)

I agree with the sentiment @#13 that the license fee for bones might be prohibitively expensive. Then again, I'm not sure how unique bones are, aside from the stem cells. You might wind up in a situation where you had to pay a licensing fee to use your own stem cells to re-grow the bones or other organs, but only if you were in a position to continually need replacements as a result of disease or occupational hazard. (This could pose a real problem for professional stuntpeople.) On the other hand, I can totally see aggressive, fear-mongering insurance agents offering new parents (and adult children caring for their aging parents) monthly payment plans on re-grown hips, arms, teeth, etc. (Child protection agencies could start watching insurance claims for possible patterns of abuse; Olympic committees might begin denying young gymnasts entry if their shins have been replaced.)

The confluence of these stories, for me, is in an assisted eye that uses natural language to detect tagged devices/Arduinos in everyday life, and can also read copyright/corporate information on freshly-printed bones, perhaps based on embedded fractals at the bone level. This would be useful in either the Olympic committee or child-protective pursuits. Of course, you'd have to photograph the bone, not simply X-ray it. Live cell imaging might help, especially if the newly-forming cells unfolded at a rate different from nascent cells. Then it's just a matter of injecting some dye and taking a sample. But that might serve only as evidence necessary for obtaining a warrant/court order.

Anyway, I'm sure you'll come up with something. You always do. :)

47:

Andrew@17: I don't think Wolfram Alpha calls so much for disbelief as for a strong grain of salt.

I think we know basically what such a design can deliver, given our current capabilities at AI: it can match questions to known types of grammar patterns, match subjects with known types of facts and symbols, and solve symbolic problems. It cannot really understand the meaning and subtext of the question, and give you an in-depth answer, because that would be strong AI.

(By the way, I think it's unfair to say "true AI" -- what we have is true AI, in the sense that computers can artificially perform intelligence tasks for us that used to require a person or a dog. They're just not a plug-in replacement for a human)

So, it's reasonable to assume that, given what we know about AI, such a system could very effectively answer most 9th grade math homework, even questions with tortured logic which seem terrifically hard for a human (but are very well formed syntactically).

The problem with these sorts of systems is that underneath, they're always built on what amounts to syntactic pattern recognition. This works for simple questions, but it's very brittle and will sometimes give very bizarrely irrelevant answers.
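
A toy illustration of that brittleness (this is emphatically not Alpha's real internals, just a caricature of pure pattern matching):

```python
import re

# Questions matching a known template get answered; a harmless paraphrase
# falls straight through, which is the brittleness described above.
PATTERNS = [
    (re.compile(r"what is (\d+) plus (\d+)\??", re.IGNORECASE),
     lambda m: str(int(m.group(1)) + int(m.group(2)))),
    (re.compile(r"what is (\d+) times (\d+)\??", re.IGNORECASE),
     lambda m: str(int(m.group(1)) * int(m.group(2)))),
]

def answer(question: str) -> str:
    for pattern, solve in PATTERNS:
        match = pattern.fullmatch(question.strip())
        if match:
            return solve(match)
    return "Please rephrase your question."

print(answer("What is 12 plus 30?"))              # 42
print(answer("Add 12 to 30 for me, would you?"))  # Please rephrase...
```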

It also can't possibly get all math problems right, because some problems require not only strong AI, but social context. For example, a (yes, real) question from a university math text of mine (I just made up the numbers though):

Q: Batman is chasing after a villain, leaping from rooftop to rooftop. Batman is 2 m tall and is running at 8 m/s. He must leap to the next roof across a 7 m gap, and the next roof is 1 m below his current position. Will he make the jump?

A (in back of book): Yes, he's Batman.
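
For what it's worth, the physics alone gives the opposite answer; a quick worked check (taking the numbers above at face value and treating it as simple projectile motion):

```python
import math

g = 9.81      # m/s^2
speed = 8.0   # m/s, horizontal
gap = 7.0     # m to clear
drop = 1.0    # m of fall available before the lower roof

t_fall = math.sqrt(2 * drop / g)   # ~0.45 s to fall one metre
reach = speed * t_fall             # horizontal distance covered in that time

print(f"reach = {reach:.1f} m of a {gap:.0f} m gap")  # ~3.6 m: he falls short
# The answer key's "Yes, he's Batman" is social context, not kinematics.
```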

48:

Since Alex @ 31 mentions Slow Glass:

Quantum Doughnuts Slow And Freeze Light At Will: Fast Computing And 'Slow Glass'

ScienceDaily (Mar. 9, 2009) — Research led by the University of Warwick has found a way to use doughnut shaped by-products of quantum dots to slow and even freeze light, opening up a wide range of possibilities from reliable and effective light-based computing to the possibility of "slow glass."

...

Fischer et al. Exciton Storage in a Nanoscale Aharonov-Bohm Ring with Electric Field Tuning. Physical Review Letters, 2009; 102 (9): 096405 DOI: 10.1103/PhysRevLett.102.096405

49:

@ 31, 48. "Slow Glass" ... how much energy can you store in these rings? Particularly if you put a lot of them together? And, of course - any big energy store has two obvious applications. Useful "batteries" and weaponry ......

50:

Justin @47 remarks: It also can't possibly get all math problems right, because some problems require not only strong AI, but social context.

Alas, no one has yet realized that all problems involve social context.

Example:

Q: Hey, Wolfram Alpha, how do I reduce crime in New York City?

A: Kill everyone in New York City.

This works, but it's socially unacceptable.

Q: Okay, Wolfram Alpha, how do I reduce crime in New York City without killing all the inhabitants?

A: Imprison all the inhabitants of New York City.

That works too. Still socially unacceptable.

Q: Hey, Wolfram Alpha, give me a proof of Riemann's Conjecture.

A: Riemann's Conjecture can be proven by direct examination.

This is true, but since it would take an infinite amount of time, not useful.

You can see the problem. Apparently, AI researchers can't.

To complete this koan, we would append: And then the AI researcher read this post, and at that moment he received enlightenment.

But that would be science fiction.

51:

Alex@45,

Those small dishes for the Allen telescope were used because of cost. They are actually mass produced for satellite TV so they are much cheaper than the dishes used for other radio interferometers such as the Very Large Array in New Mexico in the US or the Westerbork Array in the Netherlands which are a few times larger. Larger dishes are good because they have better sensitivity than smaller ones. The Allen telescope gets around this problem by simply having more of these smaller dishes.

Smaller dishes do have the property that they have a larger field of view, which for the Allen telescope is about 2.5 degrees versus something like 0.5 degrees for the other two when the observing frequency is about 1.4 GHz. This means they can observe more objects at the same time, but that can make imaging tricky since optical aberrations become more of a factor when one has a larger field of view. However, since they are looking for signals that are likely to be unresolved, this last factor is less important.
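
Those two field-of-view figures are consistent with the usual primary-beam rule of thumb, FOV ≈ 1.22 λ/D; a quick check (standard formula, illustrative code):

```python
import math

C = 3.0e8               # speed of light, m/s
FREQ = 1.4e9            # observing frequency, Hz
WAVELENGTH = C / FREQ   # ~0.21 m

def fov_degrees(dish_diameter_m: float) -> float:
    """Primary-beam width ~1.22 * lambda / D, converted to degrees."""
    return math.degrees(1.22 * WAVELENGTH / dish_diameter_m)

print(f"Allen Telescope 6.1 m dish: ~{fov_degrees(6.1):.1f} deg")  # ~2.5
print(f"VLA 25 m dish:              ~{fov_degrees(25.0):.1f} deg") # ~0.6
```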

Radio telescopes of the future, at least those operating at frequencies under say 40 GHz, probably won't be using dishes. The Square Kilometre Array project is to build a radio telescope with hundreds of thousands of small antennas. The huge numbers will make it about 100 times more sensitive than any existing system, and the field of view would be most of the visible sky. This is only now possible because computers have become cheap and powerful. So yes, you are correct that future radio telescopes will have smaller antennas, except that they won't be parabolic dishes.

52:

Justin@47:

I think we know basically what such a design can deliver, given our current capabilities at AI: it can match questions to known types of grammar patterns, match subjects with known types of facts and symbols, and solve symbolic problems. It cannot really understand the meaning and subtext of the question, and give you an in-depth answer, because that would be strong AI. (By the way, I think it's unfair to say "true AI" -- what we have is true AI, in the sense that computers can artificially perform intelligence tasks for us that used to require a person or a dog. They're just not a plug-in replacement for a human)

Well, that's why I try to use tic marks whenever possible - 'real' AI or 'true AI', etc. See, for a long, long time, people have derided certain implementations as 'mere lookup tables'. In fact, the claim has been made time and again that no possible set of lookup tables could pass the Turing test; if a machine did, it wouldn't be using brute force and would plausibly be said to be 'intelligent'.

I think this underestimates both the cleverness of human designers and the degree to which these sorts of stratagems can plausibly mimic 'true AI'. I could see a plausible future where a robot domestic could mow the lawn, wash dishes, do the laundry, etc. I could also see a robotic waiter able to discuss the menu, what's good and what's so-so, how the cook is feeling, with diners in a very natural way; a robot wine steward able to converse knowledgeably about history and vintage, types of grapes used, etc., again in a very natural human way. But from the underlying implementation, it would be clear that there's nothing going on behind the patter. There won't be a Jeeves any time soon. The inestimable, the peerless, the one and only Jeeves will be a long, long, long time abuilding (if a Jeeves can be fashioned by anything other than an Act of God).[1]

[1] Has there ever been a Jeeves to put House in his place? Having seen the BBC version of Wooster first, the contrast is, er, jarring.

53:

These radio telescopes based on large numbers of small antennae sound very like Phased Array Radars, which go back into the Sixties. There's a lot changed in how they do the adjustments and analyse the signal, and a radio telescope doesn't have to transmit.

BSB, who lost out to Sky in the UK satellite TV market, was using flat-panel antennae; what could you do with the mirrored glass on the side of a skyscraper? Flat roofs, and a multi-campus university?

Might clash with the need for solar power.

54:

David@51: The actual cost of the dishes is not important; it was the relationship of antenna hardware costs vs. computation that was significant. The optimal dish size was being reduced as the cost of the electronics declined. Another 10 years and the ATA would have been built with even more, smaller dishes.

There is also the LOFAR project that uses "ridiculously cheap antennas" spread across northern Europe as a radio telescope.

LOFAR

55:

Dave@53,

Essentially you are correct, but in addition, the antenna elements will be spread out over many hundreds of kilometres and several or more beams can be formed simultaneously; that is, the telescope could be observing different parts of the sky at the same time. The whole cost for the SKA is estimated to be about $US 2 billion and there are about a dozen countries involved. The location will be either Western Australia or South Africa.

56:

Alex@54

Yes, I meant the dishes and the associated electronics. That they were commercial products did make them cheaper, which was my point. OK, LOFAR will be ready a lot sooner than the SKA. Don't forget there is also the LWA in New Mexico, which like LOFAR is geared for low-frequency (under a few hundred MHz) radio astronomy.

57:

The perfect diamond future of Alzheimer's-suffering elderly osteoporotic cam-pirates, enabled back to socially meaningful lives by having their eye camera feeds prompt omnipresent advice from Alpha. But not everything is perfect in the paradise, as Alpha falls madly in love with a BTZ hydraulic roofbolting rig and sends its brittle-but-regenerating troops on a class trip to a Polish copper mine.

58:

mclaren@50: Deriding AI researchers like that isn't really called for. Most AI researchers aren't trying to create a general-purpose machine, they're trying to solve limited problems. Limited problems can have any necessary social context hand-rolled.

Remember, AI is about creating machines that can perform intellectual work for us that used to require a human or a dog. In fact, we've already been fantastically successful at this. For a great example, look at machine translation: one can now go to Babelfish and receive a translation more than good enough to puzzle out most documents.

Does "full translation" require social context? Of course it does. But a machine that just has grammar, syntax, and a detailed dictionary can do a pretty useful job. In the past, you would have had to hire someone, and if they weren't highly fluent in both languages, you still might get a terrible answer.

59:

ScentOfViolets@52: Those sorts of "lookup tables" type arguments have always struck me as pretty funny. I mean, we've known since 1937 that all sufficiently advanced computers are effectively identical mathematically, and furthermore, that one can approximate the laws of physics using one. So in principle, a Commodore 64 with a sufficiently large disk attached could simulate a human being down to the molecular level, if one didn't mind waiting the age of the universe to ask a question. And the molecular simulation underneath would just be based on some simple math, "lookup tables" if you will.

The real criticism of current toy AI's (particularly ones people put up for "Turing test" challenges) is that they lack most of the facilities we associate with intelligence: the ability to learn and adapt to new situations, to make decisions based on reasoning, and so on.

Of course, this is also why I have my doubts we'll be able to build a robot housekeeper that's much more than a Roomba any time soon: if we built a machine that worked in one house, it might be dangerous if you merely changed the furniture around. Even humans sometimes break things, so people with expensive possessions are often willing to pay quite a bit to hire someone who's unusually competent. The Roomba is acceptable because, well, it's not particularly dangerous. You expect it to just bump around at toe level.

60:

@59 No. We've known since 1937 that all SERIAL computers, irrespective of their "advanced" state, are identical mathematically. What happens with a massively parallel machine, which also has extra internal linking connections, like a mammalian brain, or the cube/hypercube arrangements discussed a long way up this thread - we don't know. We can only (so far) make supposedly educated guesses.

However, even with "only" 100-processors-a-side, that means 10^6 processors, PLUS the myriad interconnections (14 per processor on the inside, remember), so who is going to take the time to build one, where is the money going to come from, and how much power will it consume?

I think Charlie had better go and see if he can get some research start-up money from Mr Blofeld, before he kicks off ......

61:

Greg: Sorry, I'm busy installing OS X on my HP mini right now ...

62:

Greg @ 60: Sorry, no, the Turing Machine equivalence holds for parallel machines also. The speedup for parallel arrays of serial machines (which is what a parallel computer is, no matter how many nodes it may have) is only polynomial. The things that we think aren't equivalent to TMs, like quantum computers, mammalian brains, and the universe itself, are different not because there are a lot of pieces running at once, but because they compute things differently than TMs do (in the case of brains it's unclear whether "compute" is the right word; certainly brains don't execute algorithms in the sense that computers do). Quantum computers give exponential speedup for certain classes of algorithms, and in some senses the universe is a really, really vast quantum computer, so it does too.

63:

Bruce @ 62 is almost completely correct. He's doing a better job of polite clarification of geeky stuff on this blog than I have done.

Media almost always make misrepresentations about how much speed-up a Quantum Computer offers for a given algorithm. For the semitechnical reader, I recommend blogs by two real experts: Scott Aaronson (Shtetl-Optimized) and Dave Bacon (Quantum Pontiff).

On another point:

"Experimental Nonfiction: Science can't solve the mystery of life, but it can make it a lot more fascinating", a review by Jennifer Fisher Wilson of "The Ten Most Beautiful Experiments" (released in paperback this month), describes how noted science writer George Johnson shows that some of history's most remarkable experiments overturned long-held theories about nature. At the denouement of each of these experiments, Johnson writes, "confusion and ambiguity are momentarily swept aside and something new about nature leaps into view."...

“the most temperamental piece of laboratory equipment will always be the human brain.”

64:

Given the personalities involved, and the hype, I predict that Wolfram Alpha will be a Cuil Killer capable of, once and for all, answering clearly and emphatically that most urgent of questions: "How is babby formed?"

65:

I side with the takers of salt on the Wolfram Alpha news. Cyc, mentioned upthread, has been an albatross around AI's collective neck for more than 20 years, continually promising "common sense reasoning" if we just add one more order of magnitude of basic rules. Well, we know damn well the human mind doesn't work that way; children learn* common sense without memorizing billions of rules.

But I don't think that Wolfram is comparing Alpha to Cyc. I met Wolfram once; he didn't strike me as a boastful or hyperbolic person. I also worked for several years with the mathematician who became his primary user interface designer, and who told me a little about the internal culture of Wolfram's company. I'm betting that they have something useful that will pretty much fulfill the promises that Wolfram made, but that isn't even close to what the reporters are saying it is. Careful reading of the news is required here to understand what is actually being promised and what is just the bubbles in the froth of the reporters' uneducated minds.

* Cyc doesn't really learn, so children beat it every time.

66:

Bone printing is very cool, but it's only a part of the technology of bioprinting, which is altogether cool. Check out the Medical University of South Carolina's Bioprinting Research Center to learn some more.

67:

Greg@60: Actually, as a matter of fact, we do know how massively parallel machines with zillions of interconnects behave mathematically, and we have since 1937:

They behave exactly like serial machines, only (for some algorithms) faster.

What a massively parallel supercomputer with a billion cores can do, a C-64 can do. The C-64 is just slower. A nice fellow named Alan Turing proved this.

And furthermore, the speedup from having billions of cores is only linear, at best. If you put a billion C-64s together, at most, they go a billion times faster. In practice, they go much less than a billion times faster, because parallel algorithms are difficult. Some algorithms are very difficult or impossible to parallelize, and so a billion cores won't be any faster at all for those problems.
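
The usual formalization of "much less than a billion times faster" is Amdahl's law: if a fraction s of the work is inherently serial, n cores can never speed you up beyond 1/s. A quick illustration:

```python
# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n) for serial fraction s.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.0, 0.01, 0.1):
    print(f"serial fraction {s:.0%}: 1e9 cores -> "
          f"{amdahl_speedup(s, 10**9):,.0f}x")
# serial fraction 0%:  1,000,000,000x (the ideal, perfectly parallel case)
# serial fraction 1%:  ~100x (capped near 1/s, no matter how many cores)
# serial fraction 10%: ~10x
```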

The only technology we're looking at which is fundamentally faster than regular Von Neumann machines is the quantum computer, which can theoretically do certain specific algorithms exponentially faster. However, even though they're faster, they're still Turing Machines, and as such, your old C-64 can solve the same problems as the ultimate quantum computer if given enough time.

Fundamental differences between computers are in terms of speed and storage, not in terms of algorithmic capability.

68:

Bruce@62: A quantum computer is equivalent to a Turing Machine, too. It's just a slightly different kind of Turing Machine.

Since the Turing Machine was first proposed, a large number of alternate formulations have been proposed for the purposes of seeing whether different concepts of them are equivalent. The answer is that they all are equivalent in terms of decidability, but not necessarily in terms of performance.

Your basic TM (the math abstraction, mind you) is a machine with an infinite symbol tape which it can both read and write. Variant versions include a TM with multiple tapes (equivalent), multiple heads (equivalent) and so on.

Another type of variant is called "non-deterministic", in that the idea is that every time the TM makes a choice, it takes all possible paths simultaneously. It finishes the computation when any of its selves finds an answer -- this is conceptually similar to the many-worlds interpretation of quantum physics.

It turns out that a nondeterministic Turing Machine is ALSO equivalent to a regular Turing Machine in terms of decidability. However, it can efficiently solve problems which we believe (but can't prove) a regular TM can only solve inefficiently. This is the classic P=NP problem in computation theory, where P=regular TM, and NP=nondeterministic TM.

This also leads to a common misconception about quantum computers: quantum computers are NOT nondeterministic Turing Machines. Nondeterministic Turing Machines are (probably) much more powerful. Quantum computers efficiently solve a particular class of probabilistic problems (the class BQP), whose exact relationship to NP is still open.

If we could actually build a computer that solves NP, the sorts of problems it could solve would actually be rather frightening. For example, unlike a quantum computer, it could break any encryption algorithm we could write (even in theory) that can run on a regular computer, so long as it involves keys. It could instantly derive math proofs out of practically nothing, and so on.

As for the human brain, well, we actually have no reason to believe that the brain is special. If it works like experiment indicates (by passing messages between neurons, which make decisions based on inputs), then it's effectively equivalent to a massively parallel computer, that is, an ordinary TM.

We still don't know quite how to relate the power of an individual neuron to a Von Neumann computer, though, or how the brain's algorithms work. So it's still a mystery how powerful the brain really is -- there are lots of wild ideas floating around (10^23 ops per second, etc.) but the truth is nobody knows. Various organizations are currently looking at building supercomputers with a million cores within a few years, and it's not completely absurd to think those machines might actually be more powerful than the human brain in raw power.
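
For reference, the textbook containments between the classes discussed here (a standard summary added for clarity, not from the comment; none of the containments is known to be strict):

```latex
% Requires amsmath. BQP is the class quantum computers are believed to capture.
\[
  \mathsf{P} \subseteq \mathsf{BQP} \subseteq \mathsf{PSPACE},
  \qquad
  \mathsf{P} \subseteq \mathsf{NP} \subseteq \mathsf{PSPACE},
  \qquad
  \mathsf{NP} \overset{?}{\subseteq} \mathsf{BQP}
  \text{ (open in both directions).}
\]
```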

69:

"As for the human brain, well, we actually have no reason to believe that the brain is special. If it works like experiment indicates (by passing messages between neurons, which make decisions based on inputs), then it's effectively equivalent to a massively parallel computer, that is, an ordinary TM."

I have to disagree with this; we do not know how the brain's states succeed each other; we know a little bit about the neuron's states. What we don't know is that the result is anything like digital computation. It probably isn't much like analog computation either (and analog is definitely not Turing Machine equivalent), but I've never seen a proof that an artificial neural net is Turing equivalent. Yes, a digital computer simulating a neural net is Turing equivalent, but that's not the same thing.

70:

"Charlie's challenge" about combining the three stories kept me awake last night. I put together a little speculation along the same lines using Alpha, Twitter and an unusual ad from Craigslist about a DIY bike. I twisted the argument made at @50 above - what if you don't use it to answer question, but to rant interestingly about everything? Some tweets (140-characters long) would not pass for messages from an intelligent human being anyway. Would Alpha pass the Turing test then, in your opinion?

71:

Bruce@69:

Bruce, molecular dynamics is computable. This basically means that it doesn't really matter how the brain works: it is also computable.

And in fact, its complexity seems relatively tractable: we have a rough idea how many neurons there are, and how many synapses there are. The part we're still not sure of is how complex an algorithm it takes to model an individual neuron. If you can model 100,000 neurons with a modern computer, our supercomputers will be at par within a few years. If you can only model 1 neuron, well, we have a ways to go.

but I've never seen a proof that an artificial neural net is Turing equivalent

Here you go -- I used Google.

Of course, these "power vs. human brain" questions are always silly. We have no idea how to write a general-purpose AI, either, and we don't know how our algorithm would compare to the brain's algorithm.

72:

71: that's a proof that a Turing machine can be modelled on a neural net, not that a neural net can be modelled on a Turing machine...

There's also the Penrose theory that the brain doesn't just use molecular dynamics.

73:

Molecular Dynamics may be modeled on supercomputers doing discrete digital approximations to Quantum Mechanics, but Nature integrates in real time in another way entirely. That is what prompted Feynman to be the great-grandfather of Quantum Computing: use quantum computers to compute quantum mechanics, he said.

Whether or not Penrose is right (he's obviously a far better Mathematical Physicist than I), he at least gives a novel spin to the question: WHERE does thought take place? I think that it does use molecular dynamics, just not in the conventional model. Nobel Laureate Brian Josephson thinks that something spooky and nonlocal is going on, which allows for genuine non-inverse-square-law telepathy (he's also obviously a far better Mathematical Physicist than I). The first Nobelist I know of who denied that thought took place in the brain at all, in an idiosyncratic take on mind-body duality, was Sir John Carew Eccles, the Australian physiologist [born 27 Jan 1903, Melbourne, Australia]. His analogy was that the brain is an extremely sensitive radio receiver, picking up signals from a non-material mind/soul. According to Eccles, we have a nonmaterial mind or self which acts upon, and is influenced by, our material brains; there is a mental world in addition to the physical world, and the two interact. Eccles denies that the mind is a type of nonphysical substance (as it is in Cartesian dualism), and says that it merely belongs to a different world. (How the Self Controls Its Brain, p. 38.)

Really, WE DON'T KNOW. There is a chance that this will be understood in this 21st century.

I've lost track of which thread discussed the demise of book and magazine distribution.

From Jason Boog at GalleyCat: Four Publishers Sue Anderson News for $37.5 Million

Publishers Hachette Book Group, HarperCollins, Random House, and Simon & Schuster have sued Anderson News in federal bankruptcy court, trying to recover the total of $37.5 million the companies say they are owed by the distributor. Founded in 1917, Anderson News sells "magazines, books, comics, maps, collectibles and more" to 40,000 retail outlets around the country.

74:

"And in fact, its complexity seems relatively tractable"

This seems unlikely; we know that the brain is a complex system composed of subsystems whose dynamics are chaotic, and which are connected via feedback loops. That architecture can't be accurately simulated over any length of time, any more than the solar system, which is much less complex and is about 5 billion years old, can be simulated accurately over more than a few hundred million years.

75:

I consult for Wolfram, so I've played with Wolfram|Alpha a fair amount. Give it the input "life, the universe and everything" and it will give you the correct answer of "42."

If I type in my own name, while it doesn't know about my science fiction career, it does know about items in the Wolfram Demonstrations Project that mention my name.

If I enter "Charles Stross" it tells me you are a novelist and gives your date and place of birth. Same basic info if you enter "Bruce Sterling".

If you enter "roast beef" it gives you very detailed nutritional information. If you enter "cyan" you get detailed and useful information about the color cyan. But it is easy to trip up: its defaults to a county named vermillion. It's not perfect.

Where I've had the best experiences with it is with my 11-year-old son and his homework. Having him enter homework problems that he finds intractable is where I've been able to use it to best effect so far.

76:

Justin (50): I just tried all your questions on Wolfram|Alpha (which I have live). Wolfram|Alpha thinks you should rephrase your questions. Eliza, it ain't. But Wolfram|Alpha was not designed to pass the Turing Test.

77:

ajay @ 72: 71: that's a proof that a Turing machine can be modelled on a neural net, not that a neural net can be modelled on a Turing machine...

Actually, I think that paper argues that the specific type of neural net under study is equivalent to a Turing machine, which means it can be modeled on a Turing machine (and vice-versa).

(And I'd guess that the majority of neural networks that are actually used for computing things are simulations running on digital computers. Neural nets are modeled on Turing machines all the time -- it's much simpler than actually building a hardware neural net.)

There does, however, appear to be a hypothetical class of neural network ("analog recurrent neural networks") which can compute some things that Turing machines can't -- see here. The basic distinction, as I dimly understand it, is that if you can do computations with all of the real numbers -- not just rational numbers or computable irrationals, which is what digital computers are limited to -- then you can solve some kinds of problems that Turing machines cannot.

Whether such a network is physically possible is another question, though.

78:

That's the problem, Peter. There is a certain meme going around that Analog Is Better for certain things, like human-equivalent AI, or that analog can do certain things that it's just plain impossible for digital machines to do.

The problem is, in the real world It Just Ain't So. Because in the real world, procedures are most definitely limited to a rather smallish subset of the reals.

Take for example the classic analogue storage technique of the two lines (I first read this one in a Mathematical Games column): make one line your reference line with a length of 1. Convert your data into a convenient numeric representation, say ASCII, and append it to 1., like this:

1.78105103104116327197108108101114121321141011031171089711410812132112114101115101110116101100329710097112116971161051111101153211110232991089711511510599321029711011697115121321169710810111532981213297117116104111114115321151179910432971153272463280463276111118101991149710211632971153211910110810832971153211111410510310511097108321191111141071154432109971101213298121328310111410810511010332104105109115101108102468410410132115101114105101115321199711532105110116114111100117991011003211910511610432973211210510811111632848632109111118105101321161049711632971051141011003278111118101109981011143256443249575457443210210197116117114105110103321161041013210010511410199116111114105971083210010198117116321111023283116101118101110328311210510110898101114103329711010032111110101321111023211610410132108971151163297991161051101033211210111410211111410997110991011153298121327411197110326711497119102111114100463285110108105107101321161041013211510111410510111544321191041011141013211610410132112971051101161051101031153210910111410110812132979999111109112971101051011003297110321051101161141111001179911610511111032116111321161041013211711299111109105110103321151161111141214432116104101321129710511011610511010311532116104101109115101108118101115329799116117971081081213297112112101971141011003210511032116104101321161041141011013211510110310910111011611544321151011141181051101033210997106111114321111143210910511011111432112108111116321021171109911610511111011546

This second line, then, can 'store' as much information as you like: entire encyclopedias, the Library of Congress, what have you. And all in two lines whose total length is less than three units. Such is the power of analogue representation using real numbers. To recover the information, simply compare the storage line to the reference line and write out the ratio, reconverted from ASCII back to text.
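
(If you'd rather see it in code: a minimal Python sketch of the same trick, using fixed three-digit ASCII codes so that decoding is unambiguous, with Decimal standing in for an infinitely precise line length:)

    from decimal import Decimal

    def encode(text):
        """Pack a string into one 'length' of the form 1.ddd..."""
        digits = "".join(f"{ord(c):03d}" for c in text)  # 3 digits per char
        return Decimal("1." + digits)

    def decode(length):
        """'Measure' the line: read off the digits after the decimal point."""
        digits = str(length).split(".")[1]
        return "".join(chr(int(digits[i:i + 3]))
                       for i in range(0, len(digits), 3))

    msg = "entire encyclopedias"
    assert decode(encode(msg)) == msg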

Something is wrong with this idea, but I just can't put my finger on it . . .

79:

78: what's wrong with it is limited resolution. You can't make a line any length you want - it has to be a whole number of atoms long, at least. So does your reference line. And that means that you can't store any data string you want - you can only store data strings that are ratios between two whole numbers (because the length of each line will be a multiple of the width of an atom), which is a fairly serious constraint...
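
(A quick back-of-envelope on that constraint, in Python - assuming, very roughly, that an atom is about 10^-10 m across, and three decimal digits per character as in the sketch above:)

    import math

    ATOM = 1e-10                        # metres per atom, roughly

    def capacity_chars(line_metres):
        atoms = line_metres / ATOM      # distinguishable lengths
        digits = math.log10(atoms)      # decimal digits resolvable
        return int(digits // 3)         # three digits per character

    print(capacity_chars(1.0))          # a one-metre line: ~3 characters
    print(capacity_chars(9.46e15))      # a light-year of line: ~8 characters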

80:

ajay@79: correct, imho. kathryn@75: I played a lot with expert systems, which are a different type of beast. Training a neural net is probably the most idiotic, redundant, boring thing a human could do for a decent wage. Alpha's answer of 42 might be correct based on the information that has been put in already. Gods help us if it remains the same later - how many will say "it was under our eyes ALL the time?" :-)

81:

ajay@79:

78: what's wrong with it is limited resolution. You can't make a line any length you want - it has to be a whole number of atoms long, at least. So does your reference line. And that means that you can't store any data string you want - you can only store data strings that are ratios between two whole numbers (because the length of each line will be a multiple of the width of an atom), which is a fairly serious constraint...

That's precisely my point. Analog systems may 'win' over Turing-limited computation, but only because they make assumptions that simply don't hold in the real world. Yes, infinitely divisible matter that we could manipulate at any scale would be some nice stuff to have. Unfortunately, we don't. So practically speaking, it's all Turing-material.

Gabriel@80:

I played a lot with expert systems, which are a different type of beast. Training a neural net is probably the most idiotic, redundant, boring thing a human could do for a decent wage. Alpha's answer of 42 might be correct based on the information that has been put in already. Gods help us if it remains the same later - how many will say "it was under our eyes ALL the time?" :-)

I suspect that the sort of AI we're going to see in the future is going to be of the expert-system type: something that will, in a limited domain, behave pretty much like we would expect 'true' AI to behave. Unfortunately (I'm of the school that thinks most of the easy stuff has already been done), such an expert system will be literally generations in the making. It will be very, very complex, and will use up terabytes of memory and petaflops of computational resources . . . but it will be able to do a great many things we would like an automated servant to do for us.

Such an expert system, or rather a set of expert systems, would look an awful lot like the endpoint of an evolutionary process (perhaps not very surprisingly). And being very, very complicated, new expert systems would more often be the result of tinkering with an older version to achieve the desired results; the odds of new ones being built from scratch would be very low, the associated manpower costs being what they are.

I think that Asimov actually made this point in his Robot stories.

82:

ScentOfViolets@81: not sure I got your point, but just to make sure I deliver mine. ESes are good when you have a clear set of rules applied to a predictable range of inputs that you want to process with super-human (hm hm) speed and accuracy. They are not complex, but they can be rather large, depending on the system tested. Think of testing new ICs or networking protocol implementations - you need to run billions of data packets to simulate as much as possible of the normal working conditions while still in the design and testing phase. An ES will never generate anything new (as in original - it was not designed to). A large neural network (from my limited experience) learns from the operator, then keeps learning from its own experience via feedback - good anti-spam software, for example. Try some voice-recognition app if you want... almost immediately it will start learning from its own mistakes. The same will apply to Alpha - only it will learn from millions of human operators at the same time and rely heavily on the tremendous pile of cr*p named the Internet to make them engage in dialogue. Hence the answer is 42 for now.
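
(For the "learning from its own mistakes" point, the textbook miniature is the perceptron update rule - a sketch only, not anything Alpha actually uses: each wrong answer nudges the weights.)

    def train(samples, epochs=10, lr=0.1):
        """Perceptron learning: adjust weights only when the guess is wrong."""
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for x, target in samples:
                guess = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
                err = target - guess               # zero when it was right
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    # Toy data: learn logical OR from feedback alone.
    print(train([([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]))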

83:

Bruce@74: Simulating something accurately for a long period of time is necessary if you have an accurate model of Sally's brain, and you want to predict what she's going to do. This is probably impossible, since the scope of the simulation quickly becomes intractable -- you have to simulate her entire body, and the city she lives in, and everyone else in the city, and the sunlight, and the cosmic rays, and... Of course, mathematically, that's all computable, too. Even quantum physics can be computed -- it's just that the answer may be somewhat unsatisfying, because you get back a list of probabilities, and it takes you 10^1000 years to do anything.

However, if all you want is a mere AI that's as smart as Sally, that's not so hard. You just need an answer that's useful, not the exact same answer over long periods of time. So a computer with, say, as many CPU cores as the human brain has neurons should have no trouble.

84:

Justin @ 83:

Simulating something accurately for a long period of time is necessary if you have an accurate model of Sally's brain, and you want to predict what she's going to do.

The discussion started about the possibility of uploading; the "existence proof" that Kurzweil uses requires accurately simulating portions of the brain in silico while they communicate with other pieces still in carno. My argument is that the chaotic nature of the brain makes this very difficult (whereas Kurzweil says it's easy given the compute power), if not impossible.

I agree this doesn't prevent strong AI in any way; I am saying however that it makes the creation of an AI a different problem than just throwing a few billion neurons together. Understanding architecture at many levels is vitally important to understanding (and emulating) the operation of the brain, and we won't be able to do it well unless we understand the nature of its chaotic aspects. Some of that chaos almost certainly comes from inter-level feedback loops; mapping and understanding them will be difficult, and checking the understanding by comparing simulations to the real thing will also be difficult because of the chaos.

85:

Bruce, I haven't read Kurzweil -- did he claim to have invented that "existence proof" himself?

Because I swear I read Hans Moravec on that subject in the late eighties ...

(I have very little time for Kurzweil -- I have the strong impression that he's primarily a self-publicist.)

86:

Bruce@84: Blast you! We cannot agree; this is the internet! Um... Frogs! Frogs in space suits!

I think we were arguing a bit at cross-purposes. I was just trying to argue that strong AI is clearly possible, from rather basic facts about molecular physics (it's computable, and Turing machines... compute), and that furthermore the human brain isn't magic. I certainly never meant to imply that programming a human-equivalent AI is easy, or that we'll have brain scanners ready for upload by 3pm next Tuesday!

Personally, I suspect we'll succeed in building a strong AI long before we truly understand the operation of the brain.

Based on the size of the genetic code compared to the complexity of the brain, I think it's pretty clear that the brain is strongly self-organizing. Given enough computing power, a lot of trial and error, and someone willing to raise a baby AI for 20 years, I think we'll eventually succeed.
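
(The arithmetic behind that claim, with round numbers of my own choosing:)

    GENOME_BASES  = 3.2e9    # base pairs in the human genome
    BITS_PER_BASE = 2        # A, C, G or T
    NEURONS       = 8.6e10
    SYNAPSES_EACH = 1e4      # order-of-magnitude connections per neuron

    genome_bits = GENOME_BASES * BITS_PER_BASE   # ~6.4e9 bits: under a gigabyte
    synapses = NEURONS * SYNAPSES_EACH           # ~8.6e14 connections
    print(genome_bits / synapses)                # ~7e-6 bits per synapse

Six-odd gigabits of genome against nearly a quadrillion synapses: nowhere near enough to specify each connection individually, so the wiring has to organize itself.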

Really understanding the brain, on the other hand, might ultimately require strong AI just as a testbed. Uploading seems to be hamstrung by a simple technology problem: how do you individually reverse-engineer 100,000,000,000 neurons without killing the subject (and destroying the brain before you're finished)? If you could, we could certainly store the image today and perhaps run it at full speed within a decade or two, but imaging a brain seems wildly beyond our capabilities for the near future. And without being able to image the brain, I have my doubts how well we can understand it.

But hey, what do I know? That class in AI was full when I tried to sign up for it years ago, and I never took biology.

87:

Bruce @84

The discussion started about the possibility of uploading; the "existence proof" that Kurzweil uses requires accurately simulating portions of the brain in silico while they communicate with other pieces still in carno. My argument is that the chaotic nature of the brain makes this very difficult (when Kurzweil says it's easy given the compute power) if not impossible.

I agree this doesn't prevent strong AI in any way; I am saying however that it makes the creation of an AI a different problem than just throwing a few billion neurons together.

It doesn't say much about the creation of an AI either, unless that AI is to be the continued existence of a previously meat-based mind. I have always maintained that running an AI is the easy part; it's uploading from the meat that's going to be Real Soon Now for a looong time.

AIs that start out as AIs I may well live to see. I'm not expecting the chance to be uploaded.

JHomes.

88:

Ah, but real extropians aren't so easily defeated! Next up in the line of defense is cryonics. In a pinch, just the head will do. Then, given AI, and uploading, and the ability to accurately read damaged cells and their interconnections, why, beating death will be a piece of cake.

Seriously, I wouldn't be a bit surprised to see cryonics scams pick up over the next thirty years or so. Maybe by 2050 there might even be one or two legitimate and viable concerns. And by 2060, they'll be the next bubble.

89:

Charlie @ 85: I can't remember if Kurzweil claims the argument is his; it definitely isn't. Moravec did use that argument, I think in a book in 1988. ScentOfViolets quoted a piece from Hofstadter and Dennett's The Mind's I called "The Story of a Brain" by Arnold Zuboff, describing the same argument and raising philosophical objections to it. That was printed in 1981; it's the earliest version of the idea that I know about.

Justin @ 86: You may be right that strong AI is easier than understanding the brain; it depends on how many different ways there are to implement an intelligent mind. It's possible that there are simpler ways than the one evolution gave us, but it's also possible that we're as simple as it gets, or even the only possible way. I doubt it, but the only way to find out is to try. Which is a side of AI research that few people seem willing to recognize: we've got an awful lot of data on what intelligence isn't, and how many models of it don't work. Saves the cognitive scientists a lot of work.

ScentOfViolets @ 88:

And by 2060, they'll be the next bubble.

I hope to miss that one. If you think cleaning up toxic assets is bad, imagine one of those scam cryo outfits a couple of weeks after the crooks have absconded with the money, leaving the power bill unpaid, the Dewar flasks warm, and the corpsicles, um, defrosted.

90:

Bruce @ 84--it's also true that if you could reset the complete physical state of the universe to what it was one day ago and restart from there, allowing random quantum events to have different outcomes, then because of chaos I would probably behave somewhat differently on the "second run" than on the "first run", even though in a qualitative sense I would still be acting like myself. So I see no reason why chaos presents any obstacle to the notion that an upload would behave, in a qualitative sense, like the original brain it was based on, even if it doesn't precisely mirror how the organic brain would behave after the moment of a nondestructive upload. That doesn't really seem like a problem if you accept that my own brain wouldn't precisely mirror its behaviour in the original version of yesterday either, given such a reset and a second playing-out of the day.
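
(This is ordinary chaotic divergence, the same thing any toy chaotic system shows - e.g. the logistic map, sketched here in Python, where a perturbation of one part in a billion swamps the trajectory within a few dozen steps:)

    # Two trajectories of the logistic map, differing by one part in 10^9.
    x, y = 0.4, 0.4 + 1e-9
    for step in range(1, 51):
        x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)
        if step % 10 == 0:
            print(step, round(x, 4), round(y, 4))  # fully diverged by ~step 40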

91:

How do quantum fluctuations after a reset square with the Einstein block universe?

I'm not sure this would be possible, even in principle.

92:

Jesse M @ 90: The issue isn't whether the simulation of the whole brain will match the original brain, but whether simulating some neurons and connecting those simulations to other real neurons (while constantly converting real neurons to simulations and changing those connections) is going to behave like the original brain. I'm not saying either that it will or won't; I'm saying that we don't know, and therefore can't use this thought(!) experiment as a proof that uploading is possible. Yet this argument is the basis of all discussion of uploading that I'm aware of.

93:

ScentOfViolets: As I understand it, Einstein's view of the universe as a fixed structure in 4-space has to be modified by quantum mechanics. It becomes a tree structure, each branch being one possible outcome of a quantum measurement. Any one history is a linear path from one branch through various measurement nodes, but the entire universe contains all possible paths.

94:

Bruce @62: Sorry, no - modern parallel processors are no longer perfectly synchronous; their results can easily depend on external factors.

We've always had to insert serializing locks in parallel software, because we were not capable of coding in a fashion that took into account the precise timing of computations - and any slight code update would upset our well-timed code even if we did.

Then we started running multiple processes on the same processor, that depended on external events, so external events could affect one process, which in turn affects the timing of another process.

And modern processors' clock rates depend on their temperature - they can slow down a bit if they get too hot - which means that even without explicit external events to trigger processing that affects the timing of our code, vagrant warm breezes can affect their temporal performance.

So unless you want to claim that we can somehow include the entire heat dynamics of the universe in our simulation, parallel processing machines are NOT equivalent to serial processing machines without introducing perfect serializing locks in the code to make them behave as if they were serial machines.
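
(The lock point in miniature - a toy Python demonstration, nothing more: four threads hammering one counter. Without the lock, the read-modify-write steps interleave and updates get lost; exact behaviour varies by interpreter and machine.)

    import threading

    counter = 0
    lock = threading.Lock()

    def bump(n, use_lock):
        global counter
        for _ in range(n):
            if use_lock:
                with lock:
                    counter += 1
            else:
                counter += 1          # read-modify-write: not atomic

    for use_lock in (False, True):
        counter = 0
        threads = [threading.Thread(target=bump, args=(100_000, use_lock))
                   for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(use_lock, counter)      # unlocked runs tend to fall short of 400000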

Finally, even if we introduce the locking mechanisms I mentioned, we're really focused on how a simulated intelligence will interact with the real world, and that means that the time it takes to recognize and respond to real world events will affect how the external world interacts with it - so a slow AI will not get the same inputs as a fast AI.

95:

TomC: I'm not saying serial and parallel machines are semantically equivalent, or can run the same programs; I know better. I've worked with parallel and distributed systems, and I have the scars to prove it.

What I am saying is that parallel computers are Turing-equivalent; they can't compute anything a serial computer can't compute, and vice versa. And it doesn't matter what kind of Turing machine we're talking about: one tape, multi-tape, whatever, they're all equivalent too. Not to say that they all run at the same speed, of course.
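
(And the equivalence claim in miniature: a strictly serial loop can interleave any number of "parallel" processes and compute the same results, just more slowly. A sketch with Python generators standing in for processes:)

    def proc(name, n):
        """A 'process' that yields control back to the scheduler each step."""
        total = 0
        for i in range(n):
            total += i
            yield
        print(name, total)

    tasks = [proc("a", 3), proc("b", 5)]
    while tasks:                      # one serial loop, round-robin stepping
        for t in list(tasks):
            try:
                next(t)
            except StopIteration:
                tasks.remove(t)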

96:

combine these stories

Hmmmm. I note

--- the current massive wave of violence in Mexico, attributed to the arrest or removal of the top bosses of several big organizations.

--- the massive purchases of guns and ammunition around the time Obama was elected, which apparently continue

--- the flow of guns and ammunition across the border from the US to Mexico

--- the observation that many smart sociopaths had a socially approved niche in banking and securities and allied industries so long as the Chicago School notion persisted that pure greed was the engine of capitalism and benefited everyone

--- the question of what happens when, or if, there's less room in legitimate business for smart sociopaths.

I dunno, I guess you've written it already.

97:

pigxie dust, Goldacre's Bad Science

Oh, there they are now. Never mind.
