
Gadget Patrol: 21st century phone

(This isn't a product review, it's a big-picture overview brought to you from the universe of "Halting State".)

It shouldn't be news to anyone that smartphones — as a category — really took off in the second half of the noughties. Before 2005, few people bothered with PDAs, and fewer still with phones that had keyboards and could browse the web or send email. Current projections, however, show 25% of all phones sold in 2010 being smartphones — and today's smartphone is a somewhat more powerful computer than 2002's laptop.

At the same time, the winners in 2005's smartphone market (Palm, Windows Mobile, Symbian Series 60, 80, and UIQ) are losing ground rapidly (PalmOS is already dead, modulo the Hail Mary pass that is WebOS on the Pré) while strange new mutants slouch towards market dominance — Android, Mac OS X, and maybe Maemo.

What's happening?

Here's my hypothesis ...

Pre-2005, digital mobile phones typically ran on GSM, with GPRS data limited to 56kbps, or on Verizon's CDMA. This badly choked their ability to do anything useful and internet-worthy. By 2005, the first 3G networks based on WCDMA (aka UMTS) began to open up. By 2009, 3G HSDPA networks can carry up to 7.2Mbps. The modem-grade data throughput of the mid-noughties smartphone experience has been replaced by late-noughties broadband-grade throughput, at least in the densely networked cities where most of us live. (I am not including the rural boondocks in this analysis. Different rules apply.)
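(To put those data rates in perspective, here's a quick back-of-the-envelope comparison of page download times. It's a sketch only: the 1MB page size is an arbitrary assumption, and it ignores latency, protocol overhead, and contention.)

```python
# Rough download-time comparison for a 1 MB page (illustrative only;
# real-world throughput is reduced by latency, overhead, and contention).
PAGE_BYTES = 1_000_000

links = {
    "GPRS (~56 kbit/s)": 56_000,
    "3G WCDMA (~384 kbit/s)": 384_000,
    "HSDPA (~7.2 Mbit/s)": 7_200_000,
}

for name, bits_per_sec in links.items():
    seconds = PAGE_BYTES * 8 / bits_per_sec
    print(f"{name}: {seconds:.1f} s")

# GPRS: ~142.9 s, 3G WCDMA: ~20.8 s, HSDPA: ~1.1 s
```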

To the mobile phone companies, 3G presented a headache. They typically offered each government billions for the right to run services over the frequencies freed up by the demise of old analog mobile phone services and early TV and other broadcast systems; how were they to monetise this investment?

They couldn't do it by charging extra for the handsets or access, because they'd trained their customers to think of mobile telephony as, well, telephony. But you can do voice or SMS perfectly well over a GSM/GPRS network. What can you do over 3G that justifies the extra cost?

Version 1 of their attempt to monetise 3G consisted of walled gardens of carefully cultivated multimedia content — downloadable movies and music, MMS photo-messaging, and so on. The cellcos set themselves up as gatekeepers; for a modest monthly fee, the customers could be admitted to their garden of multimedia delights. But Version 1 is in competition with the internet, and the roll-out of 3G services coincided (and competed) with the roll-out of wifi hotspots, both free and for-money. It turns out that what consumers want of a 3G connection is not what a mobile company sees fit to sell them, but one thing: bandwidth. Call it Version 2.

Becoming a pure bandwidth provider is every cellco's nightmare: it levels the playing field and puts them in direct competition with their peers, a competition that can only be won by throwing huge amounts of capital infrastructure at their backbone network. So for the past five years or more, they've been doing their best not to get dragged into a game of beggar-my-neighbour, by expedients such as exclusive handset deals (ever wondered why AT&T in America or O2 in the UK allowed Apple to tie their hands and retain control over the iPhone's look and feel?) and lengthening contractual lock-in periods for customers (why are 18-month contracts cheaper than 12-month contracts?). And the situation with international data roaming is dismal. It doesn't hit Americans so much, but here in the UK, if I travel over an hour by air, the odds are good that I'll be paying £6 per megabyte for bandwidth. It's as if my iPhone's IQ drops by 80 points whenever I leave home.

Enter: Apple and Google.

Apple are an experience company. They're a high-end marque; if they were in the automobile business, they'd be BMW, Mercedes, and Porsche rolled into one. They own about 12% of the PC market in the USA ... but 91% of the high end of the PC market (laptops over $999, desktops over $699). How they got into the mobile phone market is an odd and convoluted story, but it's best to view it as a vertical upwardly-mobile extension of the MP3 player market (from their point of view), which has taken on a lucrative life of its own. Apple's unique angle is the user experience. Without OS X to differentiate them from the rest of the market, their computers would just be overpriced PCs. So it should be no surprise that Apple's runaway hit iPhone business team have a single overriding goal: maintain control of the platform and keep it different (and aspirational).

Apple don't want to destroy the telcos; they just want to use them as a conduit to sell their user experience. Google, however, are another matter.

Google is an advertising corporation. Their whole business model is predicated on breaking down barriers to access — barriers which stop the public from accessing rich internet content plastered with Google's ads. Google want the mobile communications industry to switch to Version 2, pure bandwidth competition. In fact, they'd be happiest if the mobile networks would go away, get out of the users' faces and hand out free data terminals with unlimited free bandwidth. More bandwidth, more web browsing, more adverts served, more revenue for Google. Simple!

This is where the Nexus One announced last week may be significant. If the rumours are true — that they're pushing it at a low or subsidized price, and have strong-armed T-Mobile (the weakest of the US cellcos) into providing a cheap data-only mobile tariff for it, and more significantly access to VoIP and cheap international data roaming — then they've got a Trojan horse into the mobile telephony industry.

I think Google are pursuing a grand strategic vision of destroying the cellcos' entire business model — their positioning of themselves as value-added gatekeepers providing metered access to content — and their second-string model of locking users in by selling them premium handsets (such as the iPhone) on a rolling contract.

They intend to turn 3G data service (and subsequently, LTE) into a commodity, like wifi hotspot service only more widespread and cheaper to get at. They want to get consumers to buy unlocked SIM-free handsets and pick cheap data SIMs. They'd love to move everyone to cheap data SIMs rather than the hideously convoluted legacy voice stacks maintained by the telcos; then they could piggyback Google Voice on it, and ultimately do the Google thing to all your voice messages as well as your email and web access.

(This is, needless to say, going to bring them into conflict with Apple. Hitherto, Apple's iPhone has been good for Google: iPhone users do far more web surfing — and Google ad-eyeballing — than regular phone users. But Apple want to maintain the high quality Apple-centric user experience and sell stuff to their users through the walled garden of the App Store and the iTunes music/video store. Apple are an implicit threat to Google because Google can't slap their ads all over those media. So it's going to end in handbags at dawn ... eventually.)

The real message here is that if Google succeeds, the economic basis of your mobile telephony service in 2019 is going to be unrecognizably different from that of 2009. Some of the biggest names in phone service (T-mobile? Orange? Vodafone? AT&T? Verizon?) are going to go the way of Pan Am and Ma Bell by then; the ones left standing will be the ones with the best infrastructure (hint: that doesn't look like AT&T right now — by some analyses, AT&T misunderstand TCP/IP so badly that their network trouble is self-inflicted) and best interoperability (goodbye Verizon), selling bits at the lowest price to punters who buy their cheap-to-disposable handsets (phones are part of the perpetually deflating consumer electronics sector; today's $350 BoM should be down to under $100 by 2019, for something a good deal more powerful) unlocked in WalMart and take ditchwater-cheap international roaming service for granted.

Probably around the time VoIP takes over from the current model, we'll see something not unlike DNS emerge for mapping OpenID or other internet identities onto the phone number address space. (God, I hate phone numbers. Running a phone service that forces everyone to use seven to twelve digit numbers is like running an internet that forces everyone to use raw IP addresses.) Then the process will be complete, and things will have come full-circle, and the internet will have eaten the phone system.

What's good for the internet is good for Google. Right now, the phone companies are not good for the internet. If I'm right about the grand strategy, the Googlephone will change that.

85 Comments

1:

quite the interesting take on things.

and yes, i suspect the change to "dumb pipe" for the telcos (and also cablecos) will be inevitable.

that's how it is with packet switching, it can be carried over just about anything that is stable enough to complete a single packet between failures (if the ttl and latency requirements of the transmission are not too narrow).

this is highly in contrast with the circuit switched systems that telcos are used to, where a cut line equals a failed connection. Hell, i think the story is that when packet switching was first explained to the gray beards of ma bell, they reacted as if it were heresy...

2:

I can't help but think you are probably right.

As someone working in the device team in one of the big aforementioned telcos, I can tell you this is something concentrating the mind around here!

The trouble is, we've had it our own way for too long. And acquisitions and mergers along the way have left once young, lean and entrepreneurial companies run by visionaries whose aim was to do mobile comms differently, in the hands of enormous ex-national PTTs whose idea of innovation is a tariff plan refresh.

There are a few of us who see the writing on the wall and have done for a couple of years now, and if we get our way, maybe we can bootstrap MobileComs 3.0. (But pessimistically, without the odd Jobs-like visionary to rally the troops, the final decisions will be made by faceless accountants who are sure that doing the same thing but CHEAPER is the way to go... Hopefully my retirement plans will be in place by then!)

But damn it, it's such a shame that quantum entanglement can't provide a viable alternative to radio.

3:

Probably around the time VoIP takes over from the current model, we'll see something not unlike DNS emerge for mapping OpenID or other internet identities onto the phone number address space.

SIP?

4:

If I read (some of the) specs right, LTE is going to be completely packetized. The only way to tell the voice packets from the data packets is by priority (if that). From a networking perspective, charging more for data packets would be asinine, and there's the possibility that someone's going to hack the data packets to look like voice packets and route things through some sort of anonymizer (an idea which appeals to me).

Still, I don't put it beyond the phone companies to try and find new ways to screw us. People ask me why cellphone innovation is so stagnant in the US, and they need look no farther than the cellphone companies themselves.

5:

I read Jonathan Zittrain's The Future of The Internet - And How To Stop It last year, and although I was too young to recall the era of services like CompuServe and Prodigy, there appear to be interesting parallels between that era and mobile phones, in terms of the move from mobile providers being active suppliers of content to being a dumb pipe connecting the phone to the Internet.

I guess one of the critical factors may have been that earlier mobile phones had truly awful web browsers and very limited screen space, so realistically only the best smartphones could offer a halfway decent web browsing experience. Also, it probably wasn't worth offering a version of a web page optimised for mobile phones at the time, considering how few people used this. Under the circumstances, network operators' walled gardens were probably just about the only place you could reasonably expect to have a remotely decent browsing experience, since they would already be optimised for mobile devices.

Now that products like the iPhone and Android make accessing regular web pages from a mobile browser a practical proposition, users are more inclined to browse the wider web. Naturally they're going to go straight for familiar providers like Facebook, BBC News etc if they have the option. And now these products are fairly common, so it's a worthwhile proposition making mobile-optimised versions of websites since people are going there direct rather than remaining with the provider's network.

You could be right about the DNS thing - maybe Google's recent unveiling of a public DNS service is a step towards that, although I can't see how that might work off the top of my head. Or possibly just doing away with phone numbers in the traditional sense entirely and using a VoIP service, making the phone number something the end user doesn't really need to know under normal circumstances, similar to IP or MAC addresses.

6:

But _everybody_ in the IT industry wants _everybody else's_ product to be a commodity. If you make video cards, you want big monitors to be a commodity. Google, thanks to all the cash flow from the ad business, is just in the best position to do something about it. The other fun thing about Google is that they don't have "partner" companies in the IT industry, so they don't have to be nice to anyone else in the industry, or even pretend to.

The record companies need a credible second source for online music, to keep Apple iTunes from becoming The Only Record Company, so an Amazon/Android "partnership" could turn into the exception.

7:

1) I'd add Amazon to the Google/Apple mix-- after all, when the ideal customer is persuaded by a Google ad to purchase something, he/she/it buys it at Amazon.com on their mobile Safari browser.

2) Microsoft is so screwed.

8:

A lot of the scepticism surrounding Google's Chrome OS is based on the limited bandwidth currently available to consumers. Want to edit a video? Well you can't do that on Chrome OS. Want to encode an MP3? You can't do that on Chrome OS. Want to do ANYTHING that needs decent CPU and local storage? You can't do that on Chrome OS.

BUT.... I can see a future where "dumb pipes" are so vast, 100Mb+, that video encoding etc will be done in the cloud. Your video is already on YouTube in HD, why not process it there. Hell, in reality there is nothing I can't do using Chrome OS if I make use of VMs on Amazon's EC2; I can use RDP (in a web page of course) to access as much CPU power as I need.

I really do think that Google has it right with this play and their vision doesn't have Microsoft (and "local" storage applications) in it at all.

With this future in mind, I can see most people using 3G / LTE more and more and ADSL/Cable less and less. The prices will dictate uptake of course but consumers want cheap dumb pipes, and before long one of the carriers will break ranks and offer cheaper prices than the rest and from there it's a race to the bottom. I can't wait.

9:

Gordon @ 8: there is a limit to over-the-air bandwidth, a tragedy of the commons. Imagine a busy location with lots of people all trying to use their mobies to watch video. There just aren't going to be enough bits available, even with public femtocell setups. The same thing is happening today with WiFi in public spaces; the more people try and use it the less bandwidth available per person. There are hard physical limits to the amount of data that can be crammed into a piece of spectrum and similar limits to the useful range of available frequencies -- radio signals that can't penetrate glass or thin walls or even clothing are not going to be much use for data feeds to mobiles.

What I envisage is a split, where mobie users get the sort of low-speed data feeds they are getting today with the content degrading gracefully depending on the data rates available at that time and place. Home is where the fat pipes deliver the cinema-quality 3-D video at gigabit rates to your eyeballs, and that has to be delivered via wires or optical fibres. The key technology will be smart devices that can make the best effort with whatever data streams they can get hold of rather than throwing their hands up and quitting because the codec can't decode a 1Gbps video stream when it's being fed with a heavily-contended 5Mbps over-the-air GPRS/GSM connection. Fractals and Sierpinski coding schemes are probably the way things will develop -- they need lots of CPU power at the mobie level to run them but they don't need lots of bandwidth to present a comprehensible media feed to the punter holding the phone. The more bandwidth available, the better the picture and no stuttering or frame drops.

10:

MattF @7: judging by the noises coming out of Redmond, and Windows Mobile 6.5, Microsoft really don't understand why people keep buying those pesky iPhones. Institutional inertia, "not invented here", and just plain old-fashioned denial will do for them unless they get a whole lot more agile, and I don't see that happening until they fire Steve Ballmer.

WinMo was developed as a Psion and Palm killer in the old days of the late 90s. Psion and Palm have, in due course, died, so innovation stagnated (just as it did in, e.g. desktop proofing tools such as spelling/grammar checkers -- once Microsoft establishes a monopoly the rate of innovation drops through the floor). But the Psion/Palm rivals were predicated on modem-speed mobile bandwidth, not a late noughties rich media experience. I'm not sure Microsoft's management have realized -- internalized at a gut level, assimilated, and begun to plan a response to -- what's going on yet. We'll know they've finally woken up when they release a version of Pocket Internet Explorer that doesn't suck like an elephant's head-cold.

Gordon: some folks need disconnected/non-network-centric machines. Business execs in club class need computers that work even when they're at 40,000 feet over the midwest. The military aren't too happy about sharing their networks with anyone else. I'm a paranoid and I hate the idea of some cloud-based service owning my data. But they (and I) are a minority.

11:

Robert @ 9: I think there will always be innovation where there is demand (as there will be money to be made). If we run out of bandwidth consumers will become disgruntled and governments will release more of the spectrum as required (perhaps those pesky voice carriers will be gone by then?). Plus as you say, clever codecs will push more and more data down the same pipes that yesterday looked so narrow.

Charlie @ 10: I can see there will always be a demand for non-cloud based users: military, banks, tin foil hat brigade etc, however for the vast majority of consumers, cloud based services will be just fine. Even the tin foil hat brigade will look with envy at the cloud users when their local machines suffer a HDD crash and they have to restore from backups over the next few painstaking hours (or days).

Although, on the other hand, I can see a data centre outage being headline news as huge numbers of users will be affected. What would hordes of "Manfred Macx" style glasses users do during an outage? They lose their VoIP, RSS feeds, Layar feeds, sat nav etc etc.

12:

I am about to drastically cut my telecom costs. I am a SW consultant, so I live and die by my Internet connection. Cable is the only provider here, so it stays. Cable offers bundled pricing, so DIRECTV goes, along with my SO's favorite channel. I'll keep Verizon and my cellphone (a cheapie, no texting for me). So who gets cut loose? Why, it's the landline. I'll add back a few bucks for the cable company's VOIP, but the big loser and the bulk of my savings is the telco. And I'm just a trailing edge adopter. My kids did this a long time ago.

13:

Gordon: Robert isn't talking about government-mandated frequencies, he's talking about the laws of physics. (We have this conversation down the pub, regularly.) There are only about 2.5THz available for wireless comms -- air is not perfectly transparent to e/m radiation at all wavelengths.

Of course, you can increase the amount of bandwidth available to users by using really short-range (0.1-10 metres) stuff, i.e. femtocells, and feeding them from fibres (each of which gets that 2.5THz to play with -- you can run multiple fibres and multiplex across them, obviously). But over-the-air faces some hard limits.

Note that I have no problem in principle with backing up my data onto a cloud service ... only with relying on it to provide access when I need it. We already have a term for what happens when an internet-mediated service goes pear-shaped: it's called a "cloudburst".

14:

I'm obviously a few conversations behind you guys hehe

I never considered the physical limits of the spectrum. What does the 2.5THz equate to in "real money"? It must be significant, even if finite. As you mention, perhaps the solution is increasing the density of cell masts. Perhaps a femtocell in every public building; the cost of that infrastructure would be significant (although peanuts to a company like Google, who are currently sitting on $22bn in cash assets).

re: cloudburst, not heard that one before, I like it. Who coined the term?

15:

But those femto cells could offer another possibility. You could try to figure out a rule set that will send packets from point A to point B using a network of femtocells transmitting data in parallel, occasionally concentrated into glass fibre.

Example: you use two directional antennas on node A to transmit data to nodes B and C, which are out of range of each other and may even use omnidirectional antennas to transmit their data to nodes B', B'', B''' etc as well as C', C'' etc. which are received at point D through directional antennas. Some other nodes (E, etc.) will also receive the signal but won't transmit it, unless other connections are choked.

In this case you could (roughly) double your bandwidth, even more than double it if you use more than 2 paths. Simply because all paths are ignorant of each other, until combined.

Don't blame me though if such an arrangement starts to think once a certain complexity has been reached ...

16:

tp1024: doesn't work. (The bottleneck is delivery. The available spectrum over the air isn't extensible; you could in principle send over 2.5Tbps of data to a single femtocell destination via multiple routes, but where they converge, it's going to be a mess.)

17:

One thing you're not noting there is that the OTA bandwidth limit isn't the issue - the backhaul economic problem is.

GPRS and 3G and to an extent LTE don't use standard IP backbone network infrastructure, but their own parallel network -- and with base stations managing (say) a hundred people, half of whom might be equipped with smartphones able to use 3-4Mbps each, that's a lot of bandwidth to have to backhaul. To date, base stations have used T1 lines, which could handle 24 users each for voice - but not even one smartphone. There's a major push on to upgrade the backhaul, and it's so desperate that LTE even allows use of the OTA frequencies to do backhaul by radio for remote basestations; but any real solution drives capex up and companies are very resistant to that, not just because of inertia, but because they cannot make money off 3G/4G data access provision right now.
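A quick sanity check on the T1 arithmetic above (a sketch only; the 3-4Mbps per-smartphone figure is the one quoted in this comment, and real usage is burstier than a steady peak):

```python
# Back-of-the-envelope backhaul check for a single T1-fed base station.
T1_BPS = 1_544_000               # 24 x 64 kbit/s DS0 channels plus framing
VOICE_CHANNEL_BPS = 64_000       # one uncompressed PCM voice call
SMARTPHONE_PEAK_BPS = 3_500_000  # midpoint of the 3-4 Mbit/s figure above

print("Voice calls per T1:", T1_BPS // VOICE_CHANNEL_BPS)   # 24
print("T1s needed for one busy smartphone: %.1f"
      % (SMARTPHONE_PEAK_BPS / T1_BPS))                     # ~2.3
```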

The market's gone from understood pricing models for voice, to flat-fee models for data, and nobody seems to have thought that out ahead of time - end result, every phone company in the world is now staring at a graph showing the revenue from data growing slowly, and the amount of backhaul needed (and thus the amount of money it costs to meet the demand) growing exponentially. *This year alone* backhaul cost $16 *billion* in the US. Figures are equally dire in the UK and EU. And it's not going to change anytime soon - the prevalence of the iPhone and its new clones (Android, Pre, etc) will see to that.

There *are* plans in LTE to use standard IP infrastructure instead of SS7 for backhaul - but they're all for the future. The deployments that just went live (the first ones) last week in Oslo don't use those plans and nothing going live next year will either.

So basically, it's not the physical limit that we're going to hit first; it's the economic one. And that odd noise you heard from AT&T last week was the initial crunching noise as we hit that limit nose-first...

18:

If this comes to pass and becomes our future reality (not just in the US, but everywhere), the Internet will have swallowed the legacy phone system as well. I, for one, welcome our new telecommunications overlords!

19:

Loading everything into the cloud will increase efficiency in several areas but we've already had warning lessons that with that efficiency comes risk - the Sidekick and Googlemail outages are just warnings.

My personal comfort zone means that I'm willing to back up some bits to the cloud (and more in encrypted form) but my data security will, for the near term at least, remain principally with multiple copies: on work system, high capacity flash, RAIDed NAS and offsite archives on optical media (tape based systems make me uncomfortable). OK, this makes it harder to reconstruct than a 'one touch' backup and restore, but that's me.

btw, I like 'Cloudburst'.

20:

@16:

You don't need to extend the spectrum, it's enough to narrow the angle from which you receive.

I.e. you could get 2.5 Tbps from a node in front of you and another 2.5 Tbps from one behind you, if you make sure that none of the signal in front of you will reach the receiver pointed to the node behind you.

It *would* be a mess if you tried to receive any signals from all those nodes at once with a single omnidirectional antenna.

21:

as a joebloggs ordinary user I love the quality of apple but prefer the dumb pipe approach rather than the 'do as I say' one. I don't mind marketing as long as it doesn't take over and it keeps the content cheaper. I worry that in the UK this connectivity will never be available except in urban areas, but who knows what the future holds. Hopefully the politicians will start to 'get IT' and sort out the inefficient telcos and build us a proper NGA network...

22:
we'll see something not unlike DNS emerge for mapping OpenID or other internet identities onto the phone number address space

It's E.164. Telephone numbers were mapped into DNS back in the early 00s; the only reason it doesn't work on many numbers today is because telcos are actively sabotaging voip for all the reasons you describe.

23:

This commoditization game of chicken played itself out once before when VoIP destroyed the long-distance market and then sucked most of the margins out of the local-exchange land-line market. The IXCs and PTTs were a bit naive back then in the good old days (1999), but I can't see the mobile operators making the same mistake again.

On the other hand, the situation is at best metastable. It only takes one operator to scream, "I'm goin' down and I'm takin' the rest of you with me! Yaaaaaaahahaha!" and the commoditization snowball will start down the hill. Maybe Google is whispering Poisonous Nothings into T-Mobile's ear in hopes of that very thing happening, but they'd almost have to be willing to buy them out to get it to happen. (Now there's an interesting US FTC/FCC turf war...)

The other thing militating against your theory being right at present is that high-density VoIP over existing mobile networks is going to be iffy until the operators get the IP capacity problems fixed. The 3G packet loss is bad enough for RTP (lots of dropouts), but the high jitter makes this even worse by inducing very high mouth-to-ear delay because the receiver has to buffer a lot more to survive the jitter. Nailed-up voice channels don't have the same problems.
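To make the jitter point concrete, here's a minimal sketch of how a playout buffer turns jitter into mouth-to-ear delay (the per-packet delay figures are invented purely for illustration, not measurements of any real 3G network):

```python
# Illustrative only: jitter forces a bigger playout buffer, which in
# turn inflates mouth-to-ear delay for a VoIP call.
delays_ms = [80, 95, 120, 85, 210, 90, 150, 100, 300, 88, 130, 92]  # made-up one-way delays

base = min(delays_ms)
jitter = sorted(d - base for d in delays_ms)

# Size the buffer to cover ~95% of observed jitter so that only the
# worst few packets arrive too late to be played out.
buffer_ms = jitter[int(0.95 * (len(jitter) - 1))]
mouth_to_ear_ms = base + buffer_ms + 20   # +20 ms for codec/packetisation

print(f"Playout buffer: {buffer_ms} ms, mouth-to-ear delay: ~{mouth_to_ear_ms} ms")
```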

Indeed, my own pet conspiracy theory on why AT&T went limp on 3G VoIP when Google complained to the FCC is that they wanted to give VoIP a bad reputation early in the process, in hopes that it would make their voice channels stickier for longer. No evidence to back this up yet, but it's a little too early to tell.

----------

Giacomo @ 3: SIP doesn't really solve the problem. Yes, you can use alphanumeric URIs, but the identification, authentication, and (most of) the routing is undefined by SIP. More likely, this gets solved by enough services like Google Voice and Skype federating with each other, until there's enough coverage that E.164 numbers shuffle off to the dustbin of history.

24:

Thank Dawkins that someone else has seen the same thing.

I've been trying to convince people that the rumours of the Googlephone were a potential gamechanger and that conventional companies should be running scared. I'd come up with virtually the same model (a tweak or two, later), but they are still just fixated on Motorola Droid and not the grenade that Google could be about to lob.

Once the number/ID/service provision is divorced from the communications provider, it's a piece of **** to see how new competitors can enter the bandwidth provision business. Doesn't matter what it is, where it is, what it covers, if your handset can talk to it, it can be used (and charged for). That means that the pico stuff, the WiMax stuff, other weird and wonderfuls can all blossom. If there is someone smart they will be looking at wifi'ing the tube right now.

Obviously your home phone goes away and the same mobile handset does everything. Seamless transition and intelligent agents for pricing means there needs to be a standard for announcing that pricing. Selling your home bandwidth, particularly if you are well positioned, becomes a nice little earner until the bandwidth price falls to zero, for a share of the ad revenue.

And the last little tweak? These devices are position aware and basically capable of a degree of augmented reality today. So if you give away the augmented reality viewer with the handset (and say your name is Google), does that mean you can then sell advertising space in locations, and over the top, of physical reality?

Buckingham Palace, brought to you by Burger King?

Drive down the road with Google Navigator and see virtual billboards in your 'Streetview' of your next turning?

25:

iirc, there recently was news about telcos getting together and hammering out a way to deliver existing phone services over LTE (yep, most likely carrying the old line switched stuff on top of the packet switched stuff, rather than setting up some kind of old-to-SIP bridge).

as for chrome os, there is a couple of things to consider:

1) it will not be running on hardware that can handle much in the way of media editing in the first place.

2) it will in the short term be google gears focused (this is a system that allows compatible pages to work in offline mode, so that one can write a mail in gmail without being online or similar). And in the long term move to html5, as it supports much the same as google gears.

still, videos i have seen of chromium os in operation (the open source branch of chrome os) show that it can handle stuff like local storage, popping up a folder tree when a usb drive is inserted, for example...

so it would not surprise me if some enterprising individual finds a way to combo chrome os (or chromium os) with existing linux software for offline work. Hell, if they can divorce the background libs from the interface, and make available a javascript API, one can do a lot of stuff in a web browser. Perfect example: palm pre, as it's basically a linux kernel, the webkit browser engine, and a whole lot of custom javascript APIs.

on that note, palm recently unveiled ares.palm.com, a page where one can use the browser to build a pre app, interface, code and all.

26:

@16: @19: Ye canna break the laws of physics, but you can bend them a bit if you throw enough computronium at them... modern modems can encode and decode up to 16 bits per cycle of carrier using phase, amplitude and polarity. Noise, attenuation, capacitive/inductive phase shifts etc. eat into that perfect situation requiring error correction but the PCI-card V.90 modems that cost about a tenner each quantity 100,000 FOB China were transmitting and receiving 40-50kbps reliably over a POTS voice channel that is/was filtered to 3.5kHz. ADSL and cable use the same techniques, just with wider frequency ranges -- a typical ADSL circuit from the exchange DSLAM to the modem in the home has a bandwidth of about 2.5-3MHz and can carry 24Mbps of data in perfect conditions (which never exist in reality). FTTCabinet might improve that by reducing the last-step wirehaul from hundreds of metres to tens of metres, but it is probably affordable unlike the rewiring necessary for implementing widespread FTTHome.

If you had 2.5THz of radio spectrum to play with in the open then that could theoretically carry 40Tbps of data. Assuming a femtocell was handshaking with a hundred people's phones that gives each user 400Gbps which should be enough for anybody.

Unfortunately other people also want chunks of that spectrum -- radar transmitters, microwave towers, OTA radio and TV transmissions, WiFi, Bluetooth, radio amateurs, satellite uplink and downlink frequencies, military channels, civilian rescue services, marine and aviation radio channels etc. etc. Oh and mobile phone users too, come to think of it. In reality it would be very unlikely for mobile data to get as much as 100MHz of usable frequency space and everybody in range has to use that allocation at the same time cutting into your viewing pleasure.
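A quick check of the arithmetic in this comment (a sketch; it assumes the whole band is usable at 16 bits per Hz, which noise, guard bands, and regulation make wildly optimistic):

```python
# Checking the back-of-envelope numbers above: best-case aggregate capacity
# if the entire 2.5 THz were available at 16 bit/Hz spectral efficiency.
SPECTRUM_HZ = 2.5e12
BITS_PER_HZ = 16            # the "16 bits per cycle" figure above
USERS_PER_FEMTOCELL = 100

total_bps = SPECTRUM_HZ * BITS_PER_HZ            # 40 Tbit/s
per_user_bps = total_bps / USERS_PER_FEMTOCELL   # 400 Gbit/s

print(f"Aggregate: {total_bps / 1e12:.0f} Tbit/s, per user: {per_user_bps / 1e9:.0f} Gbit/s")

# A more realistic 100 MHz allocation at the same efficiency, shared by
# everyone in range of the cell:
print(f"100 MHz slice: {100e6 * BITS_PER_HZ / 1e9:.1f} Gbit/s total")
```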

As someone else said the backhaul is where the mobile operators are really hurting, and it's not just the cabling, it's the switching and routing required to support lots of data. It's not purely a problem for the mobile operators, all of the ISPs and cable companies are finding it difficult to supply the demand for data from their customers but they are already in the data-pushing business and have made some kind of plan for the future. The mobile companies have only gotten to the whiteboard stage as far as I can tell and when the expected uptake of data services goes asymptotic in the near future they (and their suffering customers) are in for a rude shock.

27:

@22: It only takes one operator to scream, "I'm goin' down and I'm takin' the rest of you with me! Yaaaaaaahahaha!" and the commoditization snowball will start down the hill. Maybe Google is whispering Poisonous Nothings into T-Mobile's ear in hopes of that very thing happening, but they'd almost have to be willing to buy them out to get it to happen.

Or threaten their competitors that they would buy them. If Google owned a mobile operator, it could send prices through the floor, make loads of money (through advertising) and bankrupt the other telcos all in one.

28:

A fascinating post and interesting comments. To be a bit of a devil's advocate, other than Search and Maps, what has Google proven it can do better than anyone else? I would say its other products aren't so great. Look at YouTube. Since Google bought the company, what breakthroughs have we seen? There are outstanding software developers at Google, and some smart top managers, but they run the risk today of trying to do too many things. Maybe it's only me, but I see centripetal forces spinning up there right now. And they're alienating just about every other tech, content and services company on earth. Not to mention regulators. In an era where it's all about collaboration, they're more a Borg than Microsoft was in its worst monopolistic days. And let's not forget, there's an ugly underside to their great success in advertising, and that's invasion of privacy. Watch for a backlash against that, which would hit the cash generator funding everything else. One last thing: what about customer service? Google hasn't had to do that. When they start producing devices, or when they need to step in and own networks because carriers fight them or go bankrupt, who's going to get on the phone or an IM session, or roll a truck to someone's house, if a problem arises? I don't believe Google has the DNA to pull that off.

29:

I think tps1024 is talking about MIMO

http://www.electronicsweekly.com/Articles/2008/03/28/43424/a-rough-guide-to-mimo.htm

You can do directionality with multiple antennae.

30:

While I don't think that the scenario described is the one that will play out, I absolutely agree that the telcos are dying dinosaurs. I think the real breakthrough happens when mobile devices just self-organize into networks, communicating with dumb pipes at access points for long distance routing. Each device becomes a full service node on the network, delivering routed content originated from any other device. All we need is fairly efficient routing algorithms for mobile and better power storage for mobile devices.

Obsolete cell towers will just rust in moonlight.

31:

There is a protocol to map phone numbers to DNS, it is called ENUM. The main problem with ENUM isn't technical (BIND already supports it, as do most other DNS software packages) but rather legislative.

The Telcos decided on a "market solution" for controlling ENUM, i.e. they figured that one of them would win the battle for ENUM and dominate the market. That would be extremely lucrative, since they would become gatekeepers to VoIP services for everyone. Instead, ENUM is extremely fractured. Each telco has their own ENUM servers which don't interact with other telcos' servers. There are some free offerings, but they aren't used by the telcos.

Someone is going to have to create a new federated identity offering, and it could easily be Skype and Google Voice. Treating phone numbers as IP addresses is obvious, trivial, and already possible. The problem is that telcos are loath to do it, since it loosens their stranglehold on end-users. Just as they opposed "number portability" in the US, they are opposed to federated identity.

To some degree, it doesn't matter. A federated identity is inevitable. It is just a matter of who controls it.
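For reference, the ENUM mapping itself is mechanically trivial (a minimal sketch of the RFC 6116-style name construction; the phone number shown is a made-up example from the UK's reserved drama range, and the follow-up NAPTR lookup is only described, not implemented):

```python
# Minimal sketch of the ENUM mapping: an E.164 phone number becomes a DNS
# name whose NAPTR records can point at SIP/VoIP URIs.
def enum_domain(e164_number: str) -> str:
    digits = [c for c in e164_number if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

print(enum_domain("+44 20 7946 0958"))
# -> 8.5.9.0.6.4.9.7.0.2.4.4.e164.arpa
# A resolver would then query this name for NAPTR records, e.g. one
# rewriting the number to "sip:someone@example.net".
```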

32:

@26: If Google did buy a Cellco, and used it as a loss leader to bury the rest of the carriers, that would be an eerie parallel of what MS did to Netscape in the browser market.

33:

I'm failing to see how wireless bandwidth becoming a commodity sinks the telcos. The current telcos will still be the ones providing that commodity. Exxon does pretty well providing a commodity right now. T-Mobile's network can't handle close to the amount of data moving over AT&T's network right now so forget this Google buying T-Mobile idea being a game changer. What am I missing here?

34:

it is certainly good that mobile phones are becoming normal internet devices.
The only thing I am afraid of is privacy problems. Google collects a huge amount of information: Google Search, Google Maps, Google Checkout, Google News, Google Ads, Google My Tracks, Google Goggles etc, and the amount and value of the information they collect increases substantially when I use Google Search/Maps and numerous other applications on my mobile phone in everyday life.
What about providers? Can they log all my data from all my applications by logging network packets?
So with the advent of mobile phones as ubiquitous computers, several organizations (Google, Verizon etc) gain enormous power to collect data about me.
It frightens me.

35:

Uh, oh. This page has hit #10 on the Reddit front page so far. Duck for cover.

36:

It seems like there would be some speedbumps for Google in the form of other VOIP providers and smooth interaction between them. Google can't dominate the market without running into anti-trust laws in the US. Probably a requirement that their hardware and software work seamlessly with anyone else's. There's Skype to consider, and whatever Microsoft will unveil at the last minute, as well as those legacy users who won't stop using traditional phones. All of those will need to work together smoothly.

37:

@32: Those are different types of commodities. In your example, one is information based (cellcos), the other is physical goods (Exxon). Oil has an actual cost of production. Pulling a barrel of oil, shipping, refining, and shipping it again has an inherent per unit cost, and the resource itself is limited.

Information is different as a commodity. Sending 1 bit basically has no direct cost associated with it. Nearly everything stems back to the infrastructure costs. Operating costs are pretty minor in comparison. As such, whenever you have a situation where your pricing is primarily based upon fixed costs and amortization of infrastructure capital costs, with no real per unit marginal cost, the price invariably ends up plummeting as performance per price of technology increases, service offerings become standardized, and it results in a race to the bottom.

So far, no segment of the information and network economy has been able to avoid this phenomenon very successfully, so there's little reason to suspect that the cellcos will be able to. For more info, Shapiro & Varian's "Information Rules" is arguably the go-to book on this topic.

38:

(mobile) telcos certainly are not good for the internet. as iphone is aptly demonstrating, a cellular network is just another way to access the internet.

there is no need for telco to do anything other than provide access to internet.

their so called "internet gateways" (eg wap browsing) a la AOL or compuserve, impede access to internet.

their bundling and modifying of mobile devices to have the appearance of adding value, really just serve to impede access to internet.

the end game, was always going to be that they would be simply providers of access like ISPs.

so, telcos try to squeeze every last bit of revenue they can from existing infrastructure, maintaining corrals around data access aka access to internet until the inevitable arrives.

the point is this is not a technology issue. it is age old lack of competition, market share held by revenue maximizing players. it is all about stakeholders' and investors' demands for maxing ROI, shareholders' demands for stock growth & dividends.

the point is that there IS another business model available, but it takes bolder, hungrier, heretofore disenfranchised players to realize it.

more to the point, the technology exists that could deliver something like what we read about in some of Charlie's books. if only current players would leap past the 1-5 year time frame .. but if any player were to do that .. they would take the market for a good period of time ..

39:

Thanks Charlie for this post, a sound and useful overview. A couple of things to consider in addition:

a) Can Google get the Android experience rounded off smoothly enough for it to work "as a phone" day to day, given the Android dev model? (I'm thinking about the possible weakness of an overall design authority such as Steve Jobs). 'Beta' isn't good enough this time - I doubt the Google Phone can be offered Free of Charge. I'd say that's one of the biggest risks with Google's foray into the physical products world.

b) Fixed line telco broadband services went through similar commoditisation a few years back here in the UK. British Telecom had already laid the cable decades ago so the broadband providers themselves didn't have to bear the true capital cost of providing the service. The capital outlay in building a wireless network and the cost of spectrum licenses are huge. Competing directly with fixed line connectivity on price may not be possible given that infrastructure cost base. I guess this is part of the fear for telcos. We may see differential pricing models based on location that puts value on the 'mobile' nature of the network when you are on the move (rather than at home / in the office / in big cities with other options such as Wifi). But can they get those models right and will consumers accept them - I'm not so sure.

c) In post 27 it was asked what service Google had ever actually got right. I personally think GMail is a great service, better than the competition, and the Chrome browser is noticeably better for me. But for many people quality and indeed privacy are often a function of price - and Free is quite compelling. I doubt the Google Phone can be offered without charge, so it needs to be good; see a) above.

Regardless of the above, I agree the Google Phone if it comes is the beginning of the nightmare that telcos have known is coming but so far haven't found a way to work with or against.

40:

I had a certain familiarity with the SymbianOS kernel.

I would say there was a certain inevitability to their demise, because their basic kernel model was wrong (microkernel - with all the performance issues where you keep asking kernel threads to do work for you - and written in C++ with inheritance AND a requirement for binary backwards compatibility with binaries compiled for earlier OS releases!) and that ultimately doomed them; it's impossible with that kernel to be performant, and they ended up putting lots of effort into stuff other people didn't have to worry about at all.

I mean, for example, in the earlier releases, there was *one* kernel thread for disk device access - for ALL disks. That was improved to the point where there was then one thread *per* disk device...! Every time you want something from the disk, your user-mode thread handed a request over to the kernel (context-switch) and then waited for it to reply. And it was like that for *everything*, disk, video, network...

Also, the company internally was STUPENDOUSLY bureaucratic (I can't imagine for a second how that would have changed) and that was strangling them.

41:

They (Apple) own about 12% of the PC market in the USA ... but 91% of the high end of the PC market (laptops over $999, desktops over $699).

Not that it invalidates your point or anything, but does charging more for equivalent product really make them "high-end"? Of the price heap, I suppose...

42:

Connectivity isn't free, and it never will be. Running a cellular network is very different from plugging in a WLAN router; for some reason, IT people are afflicted by the delusion that radio is easy. This has caused a string of disastrous "woot! wifi!" startups.

AT&T, for example, is having serious issues provisioning enough capacity to support the iPhone demand; T-Mobile .nl reckons demand for data went up 640% when they launched the shiny gadget. In the UK, we've had a similar experience but driven by all those Huawei e220 data dongles stuck into youtube-streaming lappies.

Historically, the majority of new capacity created in mobile hasn't come from improved radio air interface technology or from spectrum acquisition, but from cell subdivision. You can't create cells without footprints (and you have to pay rent...) and backhaul capacity. And you have to pay BT for your leased lines.

As usage goes up, you've got to get good backhaul as well - when I got into this industry, the rule of thumb was that one cell equals 2 E-1 leased lines (i.e. two 2Mbit/s symmetric links). These days, you put in Gigabit Ethernet by default, unless you're working in Afghanistan in which case it's all about satellites.

Eventually, the money going into the telecoms industry has to equal the amount that goes to pay for civil works - men called Dave digging up the roads. It's like the old pilots' crack that the secret to success in aviation is keeping the number of take-offs equal to the number of landings.

43:

Charlie in preamble, and @ 40 ....
"Without OS X to differentiate them from the rest of the market, their computers would just be overpriced PCs."

Sorry, but Apple ARE an overpriced gadget supplier.
Which is why I won't touch them.
I agree Windows/MS are shite, but what else does one do?
I'm too old and tired to start re-learning modern forms of UNIX and programming all of it myself.....

44:

Tetragramm@35 (and everybody else):

Add Slashdot to the incoming crowd...

45:

@9:

I guess I got something wrong here that I want to straighten out. To what extent is what you said comparable to the following model?

Lots of people are in one room. On one side of the room there are A and B, on the other side there are X and Y. A wants to talk to X, and B wants to talk to Y.

--

Your model, as I understood it:

A starts to shout across the room, so does X. They can talk. Now B starts to shout across the room as well. But now neither can have a conversation because the max bandwidth has been reached.

Now, lets say A and X have a very high voice, while B and Y have very low voices. In this case both conversations can go on in parallel. Until, that is, C and Z want to have a conversation across the room as well ...

--

And here is my model:

There are lots of people in the room. Most of them zombies with no interest in a conversation of their own. But all of them know (roughly) who and where A,B,C,X,Y and Z are.

Again, A wants to talk to X, so he starts to whisper his message to the zombies near him and adds, "please tell the next zombie to pass my message on to someone closer to X". Now only the zombie closest to X will start to whisper to the zombies near him ... you know the drill.

That way B could have a conversation with Y and C with Z. Simply because nobody is shouting ...

If A passes along two messages to two different zombies at once, X could even get two messages at once, because he has two ears and a brain that can deal with it. (4 messages if A can talk in a high pitched and a low pitched voice at once etc. as you mentioned.)

--

Shouldn't that make much better use of the spectrum, or is that what you meant with the public femtocell setup?

46:

What are your thoughts on the White Spaces playing into this debate? To me, they will eventually compete with the cellcos because they are basically Wi-Fi hotspots with much longer range. If you're driving around town on your mobile phone and it can automatically jump from White-Fi hotspot to White-Fi hotspot the same as it can jump from cell tower to cell tower, you've basically got free Internet (assuming that when White-Fi routers come out they make allowances for some sort of unrestricted public use).

Of course, the bigger question in all this is why do AT&T and other large companies have the ownership rights to a public resource, the airwaves? Originally this was because the technology wasn't sophisticated enough to allow multiple people to operate on those airwaves at the same time, but that is just not the case anymore with our far more sophisticated modern technologies.

47:

Remember all that talk about the White Space spectrum opening up?

Who was a major player in getting the FCC to move in the right direction? Google. Who has been buying up dark fiber to build out their own backend network? Google again. Who's going to leapfrog the cellcos and make them unnecessary if they choose not to play along? Starts with a G, ends with a oogle.

Kudos to them on a very well played game of chess.

48:

I think you're missing the point about Apple. How would it hurt Apple if the telcos became commodities? They can sell unlocked iPhones just as easily as Google could sell unlocked Androids. And Apple sells its "Apple experience" on desktops and laptops that run over the same communication channel (i.e., the internet) as "ordinary" PCs.

I think Apple would be happy if Google's cunning plan (as described above) were to succeed. Right now, they have to tiptoe around AT&T the way they had to tiptoe around the record labels when iTunes was first announced. Fast forward to the present, and DRM is gone from iTunes and Apple is becoming an 800lb gorilla in the music business.

49:

Apple are an implicit threat to Google because Google can't slap their ads all over those media

Except that Google just bought AdMob (apparently snatching it away from Apple, who also wanted it), which serves ads to many free apps on the iPhone. And if they hadn't, they could have built an alternative and crushed them.

I'm not sure fee vs free is necessarily a conflict. Some people will choose to pay money for stuff upfront, some will refuse to and only pay by doling out their attention to advertisers. Most of us will probably do both to some extent.

50:

Ross: Apple maintain a profit margin on their hardware products which is the envy of the PC biz -- typically 20-30%, compared to the 10-11% margin everyone else makes. They also take a 30% cut of everything that goes through the iTunes/app store(s). They haven't produced a netbook because it would condemn them to PC-biz profit margins and cannibalize their low-end sales, reducing their profitability. Similarly, the iPhone is actually quite an expensive gizmo, if you want to buy one unlocked -- on the order of $600-700!

If consumers generally bear the full, unsubsidized-by-carrier-lock-in price of their handsets, then handset prices are going to get into a deflationary competitive cycle. Which would be very bad for the iPhone's hardware-side profits. If they can grow their media and content biz to take over the hardware subsidy, all's well and good(ish) ... but they're still vulnerable to cheap open standards-based DRM-free sales direct by content producers. Musicians currently need the iTunes store because it's a gatekeeper to about 90% of online mp3 sales. But once we begin losing the chokepoints and gatekeepers, who knows where things will stop?

51:

What are your thoughts on the White Spaces playing into this debate? To me, they will eventually compete with the cellcos because they are basically Wi-Fi hotspots with much longer range. If you're driving around town on your mobile phone and it can automatically jump from White-Fi hotspot to White-Fi hotspot the same as it can jump from cell tower to cell tower, you've basically got free Internet (assuming that when White-Fi routers come out they make allowances for some sort of unrestricted public use).

Assuming also that somehow you get free backhaul, free power, free site rental, free metro networking, free backbone, free transit, free peering, and free engineers! I've heard this said about WLAN itself, about WiMAX, and even about Bluetooth for access points (FAIL).

White space spectrum is more spectrum. It isn't a technology in itself, and it certainly doesn't imply anything about different terms of use. And, as we have seen, more spectrum doesn't do anything like more cellsites does. Further, there's more bandwidth in a strand of fibre than in the whole radio spectrum.

Also, regarding MIMO and LTE - there's a presentation from an Agilent Technologies guy called Moray Ramsey knocking around, about the early results from LTE bench testing. It's nowhere near as good as they make out, and in fact recent HSPA radios are much better. Part of the problem is that MIMO requires orthogonality between the multipaths to work, and once you have much less than the size of a laptop between the antennas, you start to get crosstalk between them - the idea is that the info payload is split between paths between independent antennas. Therefore, as the content of path 2 is Shannon-unexpected with regard to 1, you should be able to combine them without losing information.

But if you've got bits from 1 in 2 and vice versa, you get a net loss of Shannon information (the output is conceptually the intersection of the two not the union, which is what you wanted!).

52:

[ VACUOUS TROLLING DELETED BY MODERATOR ]

53:

Comment on the AT&T network problems, bimodal ping, and network design issue:

Since AT&T apparently hasn't chimed in to explain their problem, I'll hypothesize wildly that rather than the proffered theory of "huge buffers" - which are expensive to put into very high-speed routers like Junipers or Cisco's high-end line - the problem is a different one.

I see one plausible candidate:
* IP transport over a protocol-agnostic low-level network with small frame/packet sizes, such as ATM.

I'm sure some here remember the hey-day of ATM LANs, ca. '95. ATM had all kinds of nice properties, but it had one which makes it nasty for running IP over it, and especially TCP/IP: ATM cell payload size is a fixed 48 bytes. You can't fit even the smallest IP packet into it, and if you use a typical IP packet size of roughly 1500 bytes, it takes up 31 ATM cells.

If your network is not at 100%, no problem. If your network gets saturated however, and starts randomly dropping cells, the drop rate for ATM cells is amplified roughly 30x for IP packets. A 0.5% ATM cell loss rate - barely noticeable in the direct voice calls it was designed to handle - is statistically amplified to a 15% IP packet loss rate. That's beyond noticeable and into the "horrible" zone for most IP applications. The TCP retransmit parameters ensure that the systems on both endpoints of the link respond by retransmitting all the dropped packets, which exacerbates the saturation, which makes the packet loss worse until you hit some kind of metastable point at which a sufficient number of applications or end users are giving up in frustration. Does this sound familiar? It's one of the reasons you have Ethernet at your desktop, not ATM.
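The amplification arithmetic is easy to reproduce (a sketch assuming independent cell losses, which is itself optimistic):

```python
# A 1500-byte IP packet rides in roughly 31-32 ATM cells (48-byte payloads),
# and losing any one cell loses the whole packet.
import math

IP_PACKET_BYTES = 1500
ATM_CELL_PAYLOAD = 48
cells = math.ceil(IP_PACKET_BYTES / ATM_CELL_PAYLOAD)   # 32 (the comment rounds to 31)

cell_loss = 0.005                                        # 0.5% cell loss rate
packet_loss = 1 - (1 - cell_loss) ** cells

print(f"{cells} cells per packet, {packet_loss:.1%} effective IP packet loss")
# -> roughly 15%, matching the figure above
```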

The networks still use ATM of course, but they have evolved ways of dealing with this, usually by running high-bandwidth IP over virtual links directly onto SONET/SDH, or by creating fixed-capacity dedicated virtual circuits when the traffic has to run over ATM.

If AT&T's wireless people were thinking "these are just cell phones, we don't have to deal with more expensive provisioning practices"... they could have run right into that same problem. Note that it's not just ATM - any low-level network with a small cell/packet size over which the IP packets are fragmented will have the same macro performance characteristics.

This theory requires that the team which does AT&T's wireless network design is sufficiently isolated from the team which handles AT&T's core IP network and does not contain any ranking network design veterans who would remember running into this. In a company this size - particularly with the wireless network having ping-ponged back and forth in ownership over a number of years - that's not unlikely.

(If it isn't obvious, this is all crackpot speculation and shouldn't be taken too seriously.)

54:

Clifton: your ATM hypothesis makes sense, and has the added benefit of explaining the origins of the mess in terms of internal politics. (Putting big buffers in routers -- the question "why?" does indeed spring to mind. Bad design decisions emerging from a blinkered refusal to consider changing future customer requirements -- much more plausible.)

55:

To the commenter upstream - yes, LTE is expected to carry IP directly and use it for everything, including voice. It looks very promising, and is evolving very fast.

56:

Thanks for your very interesting thoughts. However, I think, like comment 31, that the main question is control of customer identity. In my opinion this question is much more crucial than the DNS/ENUM/OpenID mapping issue: who will control the real identity of the customers?
Up till now it has been the mobile operators, with their strong authentication systems -- mainly based on SIM cards (and the HLR) and the SS7 signalling infrastructure.
Telcos are still the only companies which can really authenticate you. Why? Because you provide them with personal credentials (copies of electricity bills, etc.), and because they can securely match these credentials with your hardware SIM card.

In an open Internet world, neither Google, Facebook nor Twitter can guarantee your identity.
Impersonation is common; hacking a Google or Facebook account is easy, as is hacking a banking account through phishing.
In the new Internet world anybody can impersonate you. Stealing a login/password is common and easy. Even HTTPS combined with credit card details is no longer secure: rogue certificate authorities, SSL/TLS/RSA cracks, and millions of real credit card numbers available on underground markets for around $0.10 apiece.
That is why new regulations, at least in Europe, will force e-commerce players to change their payment processes. Starting in mid-2010 in Europe, payment operators and banks will require their customers to use a second authentication factor to validate e-commerce transactions over the Internet.

Now, guess who can enable mass-market two-factor authentication overnight?
The mobile operators, directly or indirectly, via a code sent by SMS -- and why? Basically because there is a hardware SIM card behind every SMS.
BTW: Google and Facebook use the same approach when they need to challenge a user's identity.
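
As a minimal sketch of the SMS one-time-code flow being described (illustrative only: send_sms() below stands in for an operator or gateway API, which is exactly the piece only the mobile operator controls):

    # Hypothetical SMS-based second factor: generate, deliver, verify.
    import hmac, secrets, time

    def send_sms(number: str, text: str) -> None:
        # placeholder for a real SMS gateway / operator API
        print(f"SMS to {number}: {text}")

    def new_challenge(phone_number: str) -> tuple[str, float]:
        code = f"{secrets.randbelow(10**6):06d}"        # 6-digit one-time code
        send_sms(phone_number, f"Your verification code is {code}")
        return code, time.time() + 300                  # valid for 5 minutes

    def verify(expected_code: str, expiry: float, supplied_code: str) -> bool:
        if time.time() > expiry:
            return False
        # constant-time comparison, so the code can't be guessed via timing
        return hmac.compare_digest(expected_code, supplied_code)

Everything in that sketch except the delivery step is trivial for anyone to reproduce; the commenter's point is that the delivery step terminates on a hardware SIM that only the operator can vouch for.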

It just shows that the mapping between your phone number and your identity is not the critical issue, nor is who provides the VoIP infrastructure. So, regardless of ENUM or OpenID mapping features, what will the new authentication and signalling infrastructure be, versus the SIM card and SS7?

Mobile operators right now see full IP-based infrastructure and the IMS standards as a way to counterbalance the Google, Apple and Facebook power play. SIP signalling infrastructure and new generations of SIM cards/virtual SIMs should play a role in securing and orchestrating the cloud.
Moreover, this will be crucial for composing new services -- an orchestrated mix of telecoms and web services in the cloud. Upcoming pre-IMS service composition/orchestration infrastructure will rely on it (likewise the related charging orchestration).
So the question for operators now is: how will they leverage their key asset, the SIM card, in the cloud -- and can they maintain that asset down the road, whether or not VoIP ends up being operated in the cloud?

Apple and Google are starting to challenge operators on these topics -- for now, just by locking down access to the SIM card on the iPhone and Android.
Given that smartphone sales are projected to overtake ordinary mobile phone sales by 2012 (Infonetics Research), this is starting to become an issue.

57:

A change in degree is a change in kind. Telephony is just a particular application -- conversational audio streams. The larger issue is what we do when telephony is no longer special. I'm using the term Ambient Connectivity for connectivity beyond today's billable paths.

It's not just that the telcos want control over the experience. They require bits to be scarce in order to maintain their business model, which is based on forcing us to buy their services (http://rmf.vc/?n=AS).

58:

I'm too young to remember it, but the parallel with the incumbent network providers not wanting to embrace TCP/IP in the '80s is striking. Gosh, how many of them still have a network business?

59:

Various points:

Robert is spot on about the spectrum limits. Beyond that, the other concern I'm hearing at the industry events I attend is backhaul for the increased mobile data traffic, especially in the US, where a lot of home broadband is pretty crap to start with.

@tdolye: Getting bandwidth does not a ubiquitous mobile network make; people tend to forget that. Having all the backhaul in the world doesn't help with running a radio-based communication network, and it could be a step too far even for Google. Going from being a smart software company to a physical infrastructure provider, with the associated regional field staff, can do interesting things to your market valuation.

The throttle on Google at the moment remains their lack of understanding of the consumer electronics space, something they share with much of Microsoft (excluding the Xbox team). It will be interesting to see how the Nexus actually turns out as a device -- HTC can build phones, but they tend to have huge build quality issues.

The problem in this case isn't going to be the cellcos but the incumbent OEMs like Nokia and Samsung, not to mention the contradiction at the heart of HTC, which is still trying to work out whether it's an ODM or an OEM.

60:

Alex: Thanks, I'll have a look for the paper. I'm unsurprised that MIMO isn't good for mobile telephony with a single small brick. I was remembering some fluff piece about MIMO from roughly ten years ago in New Scientist, probably about the BLAST work. I didn't even know it was being used in LTE until I started googling.

I was thinking more about the long-term theoretical limits of the bandwidth. 'Spatial multiplexing' is the better Wikipedia article for describing this kind of spectrum reuse. I think of it as similar to how we use our two ears to pick out individual speakers in noisy environments.

Perhaps we'll get to the stage where we carry a separate antenna in the other pocket and ship its data to the phone over low-power UWB, or where our devices cooperate with others nearby to act as each other's antennas.

61:

Curiously, I've been trying to think through other sides of the same situation.
The smart phone has swallowed the PDA like the great fish did Jonah. Unfortunately, the side of my head is not shaped like a deck of cards. What I really want is something like a smart phone which acts as an intelligent remote to the laptop in my bag. But I'm not a large enough demographic to be a market.
OK, the smart phone is the true Second Generation PC. It's for everything you do when you're not at work (for broad values of "you"; and including an increasing range of work info-activities).

In the office, you use Office™. Microsoft no longer rules the universe. The universe has undergone inflationary expansion, and Microsoft is left in a stagnant galaxy . . .

62:

http://en.wikipedia.org/wiki/Ultra-wideband

I wonder if ultra-wideband can save us (mobile-bandwidth-wise)? On the other hand, it's a new way to think about radio, not just a new technology, so adoption.....

63:

Charlie@13:

There are only about 2.5THz available for wireless coms -- air is not perfectly transparent to e/m radiation at all wavelengths.

Of course, you can increase the amount of bandwidth available to users by using really short-range (0.1-10 metres) stuff, i.e. femtocells, and feeding them from fibres (each of which gets that 2.5THz to play with -- you can run multiple fibres and multiplex across them, obviously). But over-the-air faces some hard limits.

Is there some physics reason why you can't go to higher frequencies? I would think IR (around 10 micrometers) would work just fine, and the technology to use it has been there for years. Past that, there is visible light. Is there something obvious I'm not seeing?

Robert@26:

Ye canna break the laws of physics, but you can bend them a bit if you throw enough computronium at them... modern modems can encode and decode up to 16 bits per cycle of carrier using phase, amplitude and polarity.

John Schilling would disagree vehemently with you on that one, but when all is said and done, you're still only looking at a ten-fold gain.

That being said, what does 2.5 THz get you? Quite a lot, I would imagine, if your algorithms were clever enough. I'm assuming it's video that's the real killer, right? I don't do a lot of video, but I'm older and maybe an outlier.
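
For a sense of scale, here's some rough Shannon-limit arithmetic (my own illustration; the 2.5 THz figure is the aggregate quoted upthread, it has to be shared across every band, user and service, and no real radio operates at the Shannon bound):

    # Shannon capacity C = B * log2(1 + SNR) for ~2.5 THz of usable spectrum.
    import math

    bandwidth_hz = 2.5e12
    for snr_db in (0, 10, 20):
        snr = 10 ** (snr_db / 10)
        capacity_tbps = bandwidth_hz * math.log2(1 + snr) / 1e12
        print(f"SNR {snr_db:>2} dB: ~{capacity_tbps:.1f} Tbit/s aggregate")

Even at modest SNR that's terabits per second in aggregate; the scarcity comes from dividing it among everyone in a cell (and from propagation, noise and interference), not from the raw number.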

64:

ScentOfViolets@63: The technical problem is that, at first, higher frequencies have line-of-sight and penetration issues; then you get into the infrared, visible, UV, X-ray and gamma frequency ranges!

65:

alan ginsberg once said
that the universe is a new flower

for some reason that is all i have to say
after reading your wonderful discussion

man i love watching this all unfold
it is so beautiful

66:

Oops sorry not sure what happened there.

ScentOfViolets@63: The technical problem is that, at first, higher frequencies have line-of-sight and penetration issues; then you get into the infrared, visible, UV, X-ray and gamma frequency ranges.

The infrared spectrum is massively noisy. It's good for line-of-sight, short-range, potentially very high-capacity links, but it's easy to swamp or block and hard to make work over more than very short ranges.

I'm not engineer enough to comment on phase, amplitude, cross-polarisation and the other coding schemes for squeezing information into limited bandwidth. However, there are schemes which appear to break the constraints of physics -- so far they are hype and puff sold to investors ignorant of physics. Sceptical engineers haven't been able to see more than carefully pre-prepared presentations with no real-time quantitative measurements. It's not only the experts who smell snake oil here.

67:

BTW: Ramsey s/b Rumney, and the presentation is on Slideshare here.

68:

Several posters at least have spotted the opportunity/problem of going to higher frequencies (shorter wavelengths) to get around the bandwidth problem.
This is why, of course, visible/IR light is so good in "optical" cable...
But when you get to the end of the cable, and an air gap - what then?

Do you have phones with TWO receivers -- radio for the "country", and light (or "close-to-visible") in the towns?

Low-power lasers operating just outside the visible?
Terahertz waves, as used for body-scan surveillance?

Um errr ....

69:

Forbes had a piece covering at least some of the same turf, a month or two back.

http://www.forbes.com/forbes/2009/1116/technology-mobile-4G-telephony-metropcs.html

70:

Most of the posts here and on Slashdot (http://mobile.slashdot.org/story/09/12/19/2230246/Making-Sense-of-the-Cellphone-Landscape) seem to support the original assertion made by Charlie.

"They [Google] intend to turn 3G data service (and subsequently, LTE) into a commodity, like wifi hotspot service only more widespread and cheaper to get at. They want to get consumers to buy unlocked SIM-free handsets and pick cheap data SIMs. They'd love to move everyone to cheap data SIMs rather than the hideously convoluted legacy voice stacks maintained by the telcos; then they could piggyback Google Voice on it, and ultimately do the Google thing to all your voice messages as well as your email and web access."

Tom in comment 37 makes an economic case to support Charlie's assertion:

"Information is different as a commodity. Sending 1 bit basically has no direct cost associated to it. Nearly everything stems back to the infrastructure costs. Operating costs are pretty minor in comparison. As such, whenever you have a situation where your pricing is primarily based upon fixed costs and amortization of infrastructure capital costs, with no real per unit marginal cost, the price invariably ends up plummeting as performance per price of technology increases, service offerings become standardized, and it results into a race to the bottom."

I do not believe Google will succeed in turning the mobile network operators (MNOs) into cheap data providers by driving them to commoditization. The service provided by the MNOs is not bits through the air "with no real per unit marginal cost." The core service provided by the MNOs is access to the mobile spectrum. This core service will become more valuable over time and, combined with additional services (voice, Internet, video on demand, mobile banking, financial transactions, identity transactions, new advertising models, etc.), will ensure the long-term success of the MNOs.

Either directly through partnerships or indirectly through data charges, the MNOs will participate in all revenues that flow through their networks.

There is a key insight missed by Charlie and others who have posted on this topic: unlike cable and fiber, which in theory could be laid in infinite amounts, spectrum is a finite resource, and the dominant MNOs have already been awarded incredibly valuable allocations.

An idea of the complexities of frequency allocation can be gained by viewing frequency allocation charts:

U.S. Frequency Allocations
http://www.ntia.doc.gov/osmhome/allochrt.PDF

U.K Frequency Allocations
http://www.onlineconversion.com/downloads/uk_frequency_allocations_chart.pdf

Additionally, several of the posts here and on Slashdot make the mistake of equating higher throughput with greater bandwidth. While each generation of mobile technology has increased throughput, bandwidth (the usable spectrum range) remains a finite and very valuable resource which is leased primarily by the dominant MNOs.
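
A toy way to see the distinction (my illustrative numbers, not the commenter's): per-cell throughput is roughly allocated spectrum times spectral efficiency, and each technology generation only improves the second factor, while the first is the finite, auctioned resource.

    # throughput ~= spectrum (MHz) x spectral efficiency (bit/s/Hz), per cell.
    # The efficiency figures below are illustrative orders of magnitude only.
    def cell_throughput_mbps(spectrum_mhz: float, bits_per_hz: float) -> float:
        return spectrum_mhz * bits_per_hz      # MHz x bit/s/Hz = Mbit/s

    for generation, eff in [("GPRS-era", 0.2), ("HSPA-era", 1.0), ("LTE-era", 2.5)]:
        mbps = cell_throughput_mbps(20, eff)
        print(f"{generation}: 20 MHz of spectrum -> ~{mbps:.0f} Mbit/s per cell")

Efficiency gains are real but bounded (Shannon again); the 20 MHz term is what gets auctioned.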

In the United States, bandwidth is usually allocated through a government (FCC) auction process. As more bandwidth is dedicated ("unleashed") for mobile use, the dominant MNOs are in the best position to win the auctions. This is exactly what happened in the 700 MHz auctions held in 2008 (http://en.wikipedia.org/wiki/United_States_2008_wireless_spectrum_auction)

Even with improvements in throughput, consumer demands for new services on intelligent mobile devices will eventually push the limits of allocated bandwidth. What this means is the dominant MNOs have a resource (spectrum allocation) that will become even more valuable over time. What this also means is that consumers will be charged based on their data usage.

71:

Re Packet loss with TCP/IP over ATM.

network FAIL!

Basically, the ATM layer is a completely abstracted transport layer that - so far as the TCP/IP stack is concerned - is a PERFECT WIRE. So you would NOT get a 30x failure rate. What you WOULD get is significantly more latency (hence the issues with RTP).

Posit: a TCP/IP packet is split into 30 ATM cells. Each cell is routed, and assumed to work. ATM will have an ACK timeout that results in retransmission, hence a local buffer requirement between nodes to ensure delivery. With a 0.5% failure rate, you'll lose one cell (dropped) every 200 or so. That increases the latency of the line by some number z (equal to the average transmit time of one ATM cell). Since you get about 6 TCP/IP packets per 200 cells, you will see an average latency increase of z/6 (or so). Since the normal latency of a TCP/IP packet is about 30z, the revised latency is now roughly 30.166z (approximately a 0.55% increase in latency).
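
Transcribing that arithmetic literally (my sketch; it takes this comment's assumption of per-link cell retransmission at face value, which is exactly the point in dispute with comment 53):

    # Latency penalty under the "ATM retransmits lost cells locally" model.
    cell_loss = 0.005                      # 0.5% cell drop rate
    cells_per_ip_packet = 30               # the round figure used above
    cells_between_drops = 1 / cell_loss                       # ~200 cells
    packets_between_drops = cells_between_drops / cells_per_ip_packet
    extra_latency_z = 1 / packets_between_drops    # extra z per IP packet
    baseline_latency_z = cells_per_ip_packet       # ~30 z per IP packet
    print(f"latency increase ~{extra_latency_z / baseline_latency_z:.2%}")
    # ~0.5%, in line with the rough 0.55% figure above.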

(Caveat: My network theory classes were more than 20 years ago... should still be substantially true. Abstraction rules, eventually.)

72:

AT&T and Verizon between them had $220 billion in revenue in 2008. Google was at $22 billion. Of course, only Google has intelligent employees; the telcos defend their revenues by hiring idiots.

73:

Greg @68: The future mobie is going to be based around a software-controlled radio that can handle pretty much any frequency band thrown at it -- you are already seeing convergence in single-chip devices that can emulate Bluetooth, WiFi and GPS receivers and pretty soon GSM/voice frequencies as well. It's not difficult to make a TX/RX subsystem on a chip that can handle 3-5GHz; after all CPUs have been able to run stably at those sorts of frequencies for a decade or more now. Building a silicon radio stage into a mobile handheld receiver that can handle THz (1000GHz) frequencies is a bit more of a challenge which as far as I know nobody has successfully achieved, and definitely not productionised onto a chip they can roll off the production line at a dollar a shot.

Of course when they do, there's the wonderful Halting State scenario of a crowd of people who all have their phones hacked simultaneously -- given that the phones all have accurate positioning systems and clock timing circuitry it might be feasible to configure all the mobies into a distributed terahertz maser with a couple of kW of collimated output power fired at a single target, a cloud-based assassination tool. Won't that be fun?

74:

Charlie: Concerning your comment: Similarly, the iPhone is actually quite an expensive gizmo, if you want to buy one unlocked -- on the order of $600-700!

Apple already sells iPhone-like devices that will make excellent VoIP devices as Apple's APIs evolve and improve. These iPod touch units are presumably sold at a profit for much less than $600. They have the same excellent industrial design and build quality, and run essentially all the same software, with only the cellphone radio absent (less important details omitted). My point is that Apple seems abundantly ready for a mobile computing world that has evolved beyond its current state.

75:

Steve: take an iPod Touch and add a MiFi and you've got something very interesting, yes.

Robert: I expect your typical mobile phone can't put out more than a couple of watts of power, max. In which case, you'll need hundreds to thousands of hacked phones in a smallish space to turn them into a useful phased-array death ray.

But what if you could hack all the demo phones in a phone shop ...?

76:

Arun @72: Verizon and AT&T don't employ idiots. What they do have a problem with is corporate group-think -- to at least the same degree as IBM circa 1990, and for much the same reason.

77:

Robert@73:

given that the phones all have accurate positioning systems and clock timing circuitry it might be feasible to configure all the mobies into a distributed terahertz maser with a couple of kW of collimated output power fired at a single target, a cloud-based assassination tool. Won't that be fun?

and:

Charlie@75:

I expect your typical mobile phone can't put out more than a couple of watts of power, max. In which case, you'll need hundreds to thousands of hacked phones in a smallish space to turn them into a useful phased-array death ray.

This sounds like an updated version of the Clarke(?) story where military soccer fans coordinated shining their reflective programs onto a corrupt referee, causing him to go up in a cloud of greasy smoke. IIRC the consensus is that this would actually be possible, but would have to be done quickly with a lot of people because of atmospheric defocusing. Of course, a reflective program guide is probably putting out several hundred watts instead of just a few.

Somebody really ought to do an updated version of "Tales from the White Hart". I certainly haven't seen any stories of this type for a while. Pointers, anyone?

78:

Charlie @ 10: I'm a paranoid and I hate the idea of some cloud-based service owning my data.

I'm right there with you. Fortune favors the paranoid.

79:

Excellent take on this. I'd been wanting to say it, but you've put my thoughts into words. I just hope we'll be able to get our telcos to do the same in the third world.

80:

IEEE Spectrum has a fascinating article on mechanical radio. Apparently, the problem with software-defined radio (SDR) is the usual hackerware one - it does loads of things acceptably, but nothing superbly, and it burns a lot of power doing them. Micro- verging on nano-mechanical parts don't pull anything from the battery and can achieve incredibly high receiver quality factors.

81:

Alex @ 80: Software Defined Radio receivers are not the problem with mobiles; it's the transmitters that really suck the juice, and cell-based mobiles require regular TX/RX handshakes even when there is no customer traffic on the links.

There are tricks RF-stage designers can play with phased-array antennas to make the TX output directional but it requires all sorts of positioning knowledge for the on-board aiming system to point the virtual antenna at the local cell-tower and keep it locked on -- a mobie in a pocket is moving in three dimensions and undergoing roll/pitch/yaw pretty much all the time. It doesn't save that much power given the CPU and sensor load required to keep the aim good enough to reduce the TX energy by even 6dB.

Of course phased-array TX stages in mobies would work quite nicely as a cloud/crowd-based Scorpion Stare Mk II which does not require line-of-sight to operate.

I hereby announce the creation of a new word: croud, referring to cloud-based computing resources based on physically adjacent mobile computer systems networked by wireless links.

82:

In the old days, phone companies sold communication, and they continue to do so today. Whether communication means voice services or data services (or voice-over-data services) is really irrelevant, since in the end they have been selling the same thing for ages: bandwidth.

What we do with that bandwidth has changed over the years, but we have always needed the Telcos and that won't change.

Google wants to sell Google Voice over VoIP? Who cares? The customer is still paying for the bandwidth, whether it's for voice calls or VoIP, so for the telco it should be irrelevant. If it isn't, then they aren't doing their jobs right.

83:

Joao: what you're missing is that voice takes very little bandwidth -- about 2.4kbaud each way. But the tariffs phone companies soak their customers with are denominated in call setup and connection time (how many cents per minute it costs you to place a mobile call). They have enough bandwidth to effectively give voice calls away for free -- too cheap to meter -- but they don't want to do that: they'd lose money.
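
To put that in per-megabyte terms (my illustrative tariff, not a quoted one):

    # What per-minute voice billing looks like expressed as a data price.
    codec_kbit_s = 2.4 * 2                 # ~2.4 kbit/s each way, as above
    price_per_minute_pence = 20            # assumed retail per-minute charge
    mb_per_minute = codec_kbit_s * 60 / 8 / 1024       # kbit/min -> MB/min
    print(f"{mb_per_minute:.3f} MB of voice per minute")
    print(f"effective price ~{price_per_minute_pence / mb_per_minute:.0f} p/MB")

At roughly 35 kB of voice per minute, a 20p/minute tariff works out at something like £6 per megabyte -- orders of magnitude above what the same bits cost when sold as bulk data.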

Look at the data cost per megabyte for SMS messaging. It's a complete rip-off if you evaluate it in those terms: but they still charge 7.5p/text (or about 30p/kilobyte, in effect) despite roaming data being three orders of magnitude cheaper.

The reason they're not making out like bandits is that, to get this high-bandwidth OTA capacity, they had to build out lots of expensive infrastructure. If voice charging switches to VoIP from the current by-the-second model, they'll lose revenue. If text messaging is replaced by Twitter, they'll lose revenue. And that revenue is going into servicing the debt they took on to build the infrastructure ...

84:

Charlie: I know, but in the end, the mobile providers just have to change their rates to match the customer usage profiles.

Maybe in the States the providers are going for flat-rate data plans with unlimited usage, but in Europe we don't have those, and data plans are relatively expensive -- much more expensive than voice calls, where you can get unlimited calls and SMS for 5-10 euros a month (I personally don't even pay for voice calls, just data usage).

PS: And to show the control mobile operators have over their networks: in my country VoIP traffic is not allowed on the usual data plans. Easy, isn't it? No Google Voice "panic" needed, since the mobile operators casually say "no VoIP allowed", period.

85:

Joao: I'm in the UK -- we tend to track the EU model rather than the US model. However, there's a price war going on over mobile data at present; as long as you don't need to go outside the UK, you can get a lot of data relatively cheap.