Eric Schmidt, CEO of Google, is wrong, again...

In Mr Schmidt’s article in The World in 2007, he talks about the concept of “cloud computing”, whereby all applications are on-line and there is no local processing of data. In the “cloud computing” world, all applications are web-based (at www.google.com/.office or similar), so there is no need for heavy-duty processing power where the user actually is.

 

This is not a new idea; Larry Ellison of Oracle (and anti-Microsoft zealot) has been preaching it for a couple of decades. Both Schmidt and Ellison are wrong for some pretty straightforward reasons, and I’m constantly amazed at how much credence is given to this idea of “cloud computing”.

The application has to suit the nature of the task in hand. For example, the human ear doesn’t cope with variable or unpredictable delay in the delivery of sound and finds it disruptive to the point of rendering a call that exhibits these characteristics unintelligible, which is why many VoIP calls over the internet with applications such as Skype are often unsatisfactory. The internet has no ability to deliver any degree of guaranteed Quality of Service (QoS), and although its resilience is high, its actual performance is poor.

Of course, voice over internet calls are free and people may well accept the compromise - voice over the internet works well enough for most people most of the time, but overall it doesn’t compare with the Plain Old Telephone Service (POTS) of old.

“Cloud computing” requires users to be on-line 100% of the time - and that’s just not possible. Right now I’m writing this on a laptop in a lovely hotel (the Miller Howe Hotel) overlooking Windermere in the English Lake District, one of my favourite places on the planet. The views are magnificent, but I can’t get on to the internet as there is no access, other than Bluetooth to a phone/modem, which would be both expensive and slow.

When we travelled here by car it was the same story - internet access for any meaningful task was just not possible. For sure I could have used a GSM data card, but the bandwidth offered just doesn’t compare with a fixed line.

So local processing, and therefore local applications, will be a requirement for many years to come, and some applications just don’t lend themselves to “cloud computing” or “net computing” anyway and will always require local processing capability.

One such example is the post-processing of video. For this to work in the “cloud computing” paradigm, I have to replicate what happens now locally, but over a Wide Area Network (a WAN). First I have to connect the DV camcorders we use to... what? Currently I connect them to my quad-core Apple Mac G5 - what would I use in the cloud computing model? Let’s run with the G5 then, as there is no current alternative. The G5 becomes a local node that sends the content of a DV tape to a post-production suite on-line, in the cloud at google.com/edit, perhaps.

All well and good - so what’s the problem? According to Wikipedia, “DV uses DCT intraframe compression at a fixed bitrate of 25 megabits per second (25.146 Mbps), which, when added to the sound data (1.536 Mbps), the subcode data, error detection, and error correction (approx 8.7 Mbps) amounts in all to roughly 36 megabits per second (approx 35.382 Mbps) or one Gigabyte every four minutes”.

So a full hour of DV video amounts to roughly 15 Gigabytes (15,000,000,000 bytes) of data. Assuming 8-bit bytes, this means there are 120,000,000,000 bits of data that need to be sent across the network. Note that data volume is measured in bytes, while data transfer speed or bandwidth is measured in bits/second.

So-called “broadband” based on ADSL provides at best 1Mbps upstream into the internet - that’s 1,000,000 bits/second. So to transfer our 120,000,000,000 bits from one tape at 1,000,000 bit/s we’ll need 120,000 seconds, or 2,000 minutes, or 33.33 hours, or 1.39 days… (numbers are rounded for convenience).
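
For anyone who wants to check the sums or plug in a different upstream speed, here’s the same back-of-the-envelope calculation in Python - the only inputs are the figures quoted above: one Gigabyte of DV per four minutes of footage and a 1Mbps ADSL upstream:

```python
# Back-of-the-envelope: how long does one hour of DV tape take to upload?
# Inputs are the figures quoted above: ~1 GB of DV per 4 minutes of footage
# and an ADSL upstream of 1 Mbps (1,000,000 bits/second).

TAPE_MINUTES = 60
BYTES_PER_4_MIN = 1_000_000_000        # "one Gigabyte every four minutes"
UPSTREAM_BPS = 1_000_000               # ADSL upstream, bits per second

total_bytes = BYTES_PER_4_MIN * (TAPE_MINUTES / 4)   # 15,000,000,000 bytes
total_bits = total_bytes * 8                         # 120,000,000,000 bits

seconds = total_bits / UPSTREAM_BPS
print(f"{total_bits:,.0f} bits at {UPSTREAM_BPS:,} bit/s")
print(f"= {seconds:,.0f} s = {seconds/60:,.0f} min "
      f"= {seconds/3600:.2f} h = {seconds/86400:.2f} days")
# -> 120,000,000,000 bits at 1,000,000 bit/s
# -> = 120,000 s = 2,000 min = 33.33 h = 1.39 days
```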

I guess you can see where this is going….

The networks just can’t deliver the bandwidth or performance at a realistic price to make this anything other than a pipe dream.

However, the bandwidth inside my computer is almost immeasurably cheap and operates sufficiently fast, together with FireWire 800 (800Mbps) data transfer technology, to make transferring video from tape to Final Cut Pro, the editing package of choice, a “real time” affair - an hour on tape takes an hour to transfer.
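
A quick comparison of the pipes involved shows why local capture is limited only by tape playback speed - the numbers below are the nominal rates mentioned above, so treat them as rough order-of-magnitude figures:

```python
# Rough comparison of the pipes involved (nominal rates, as quoted above).
DV_STREAM_MBPS = 36          # DV video + audio + subcode/error data
FIREWIRE_800_MBPS = 800      # local FireWire 800 bus
ADSL_UPSTREAM_MBPS = 1       # ADSL upstream

print(f"FireWire headroom: {FIREWIRE_800_MBPS / DV_STREAM_MBPS:.0f}x the DV stream rate")
print(f"ADSL upstream:     {ADSL_UPSTREAM_MBPS / DV_STREAM_MBPS:.2%} of the DV stream rate")
# FireWire can swallow the DV stream roughly 22 times over, so capture runs at
# tape speed (an hour of tape takes an hour); the ADSL upstream carries under
# 3% of the stream rate, hence the 33-hour upload worked out earlier.
```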

I have long said that you can never have enough bandwidth, just as in principle you can never have a fast enough processor, too much RAM or too big a disk.

The network may well be a useful adjunct to the computer - but it won’t become the computer for a very long time - not until the bottleneck of copper in the local loop is removed. And even then it’s a doubtful outcome.

Where ADSL provides an 8Mbps service downstream, ADSL2+ delivers up to 24Mbps downstream and 3Mbps upstream, but it’s still contended (shared), it’s still asymmetric, and it’s still subject to the basic laws of physics, with signal degradation over distance.

It’s contended because it’s a shared service at the DSLAM - this means that the bandwidth the service provider is using on the network side of the DSLAM is shared between the users on the customer side. If 100 customers each use the full ADSL 8Mbps bandwidth, then for the service provider to offer an un-shared or un-contended service, they have to have, on the network side of the DSLAM, a 100×8Mbps = 800Mbps backhaul circuit. That’s nearly a full gigabit/second, which in Wide Area Networking terms is massive. And expensive.

If the service provider uses the standard domestic contention ratio of 50:1, then they need a mere (100/50)×8Mbps = 16Mbps - which would fit very conveniently down a regular 34Mbps SDH circuit. Much cheaper, obviously, but notice that the effective bandwidth available per user is 8/50 = 0.16Mbps! A bit of a bummer if you thought you were paying for an 8Mbps service! Of course the 1Mbps upload bandwidth is also divided by 50, which is why it can take an age to upload a video to YouTube - and 1/50th of 1Mbps is 20Kbps, less than the now defunct dial-up delivered!
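
Here are the same sums written out so the contention ratio can be varied - the 100 users, the 8Mbps/1Mbps line rates and the 50:1 ratio are the figures used above:

```python
# Contention arithmetic for an ADSL DSLAM, using the figures above.
USERS = 100
DOWN_MBPS, UP_MBPS = 8, 1        # per-line ADSL rates
CONTENTION = 50                  # standard domestic contention ratio, 50:1

uncontended_backhaul = USERS * DOWN_MBPS                 # 800 Mbps
contended_backhaul = (USERS / CONTENTION) * DOWN_MBPS    # 16 Mbps

print(f"Un-contended backhaul needed: {uncontended_backhaul} Mbps")
print(f"Backhaul at {CONTENTION}:1 contention: {contended_backhaul:.0f} Mbps")
print(f"Effective downstream per user: {DOWN_MBPS / CONTENTION:.2f} Mbps")
print(f"Effective upstream per user: {UP_MBPS / CONTENTION * 1000:.0f} Kbps")
# -> 800 Mbps, 16 Mbps, 0.16 Mbps down and 20 Kbps up - less than dial-up.
```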

The service providers can get away with this because they rely on the statistical probability that not all 100 users will want to transfer large amounts of data at the same time. But with user-generated content, the risk of this happening increases and the ADSL model shows its fundamental weaknesses.
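
To put a rough number on that gamble, here’s a toy model. The assumption that each user is independently busy at any instant with some probability p is mine, purely for illustration - operators don’t publish such figures - but it shows how quickly the odds turn against the 50:1 model as usage grows:

```python
# Toy overbooking model (illustrative assumption, not operator data): each of
# the 100 users is independently transferring flat out with probability p.
# The 16 Mbps backhaul worked out above carries only 2 users at the full
# 8 Mbps line rate, so "congested" here means more than 2 users busy at once.
from math import comb

USERS, BACKHAUL_SLOTS = 100, 2   # 16 Mbps backhaul / 8 Mbps per line

def p_congested(p: float) -> float:
    """Probability that more than BACKHAUL_SLOTS users are busy at once."""
    ok = sum(comb(USERS, k) * p**k * (1 - p)**(USERS - k)
             for k in range(BACKHAUL_SLOTS + 1))
    return 1 - ok

for p in (0.005, 0.02, 0.05):
    print(f"p(busy)={p:.3f} -> chance the backhaul is congested: {p_congested(p):.1%}")
# As the fraction of users pushing user-generated content rises, the chance of
# congestion goes from negligible to near-certain.
```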

The Napsterisation of the web meant that ADSL-based services were stillborn - launched after the market demand had changed. ADSL was designed in an age when the perceived use of the internet was that most people would download more than they would upload. The concept of music-sharing services such as Napster (let alone YouTube, where everybody publishes in video) just wasn’t imagined (so quite where all the content was meant to come from, I’m not sure).

For “cloud computing” to be anything other than a pipe dream, telecoms has to deliver a number of things:

  • Massive bandwidth to the home at greatly reduced prices (which, if delivered, would mean their “wholesale” industry would collapse, so forget this one!)
  • Data services to the home with guaranteed QoS
  • Services with sub-millisecond latency (delay) or better

In fact, Schmidt is replicating what telecoms has been doing for the last 20 years - trying to sell network services as a cloud, rather than selling the benefits of the service. The network is a cloud vs the computer is a cloud - it’s the same argument with no clarity or explanation of how it is meant to be achieved.

Bamboozling the customer with techno-nerd nonsense, or even trying to hide the technology - the “how it works” - from the consumer, will result in you having no credibility. “Never mind how it works, it’s a cloud” - but how do you explain the Service Level Agreements (SLAs) without drilling into the details of the network?

In any case, ADSL itself has no SLA - no guaranteed Quality of Service (QoS). It certainly can’t have one whilst it’s a contended service, just as there is no guarantee that you’ll get the hotel room or the aircraft seat you’ve booked.

Those industries, travel and hospitality, work on the principle of “overbooking” - and will happily upgrade you to business class, or put you up in a hotel, or a different hotel - whatever it takes, because they know that most of the time the double booking won’t materialise: things happen and we, the consumers, don’t always turn up.

Telecoms has been doing the same thing for years with data services - frame relay and ATM work in the same way. As does ADSL - the model works in favour of the telecoms operators because, they reason, it is unlikely that all 50 people sharing a connection will want to upload vast quantities of data at the same time. At least that was the assumption when ADSL was designed. YouTube and the rise of “user-generated content” may cause them to revise that model, though, and place arbitrary limits on data volumes.

This is happening now - some of the cheaper services have volume limits, and once those have been exceeded the service slows unless a premium is paid. Good for the service provider in the short term, but bad for the user, and more nails in the coffin for Schmidt’s argument.
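
As a sketch of the kind of policy being described - the cap, the throttled rate and the premium option below are illustrative numbers of my own, not any particular provider’s terms:

```python
# Illustrative "fair use" throttle - not any real provider's policy.
CAP_GB = 40            # monthly volume allowance (illustrative)
FULL_MBPS = 8          # headline downstream rate
THROTTLED_MBPS = 0.5   # rate applied once the cap is breached

def effective_rate(used_gb: float, premium_paid: bool) -> float:
    """Downstream rate the user actually gets for the rest of the month."""
    if used_gb <= CAP_GB or premium_paid:
        return FULL_MBPS
    return THROTTLED_MBPS

print(effective_rate(25, premium_paid=False))   # 8   - under the cap
print(effective_rate(60, premium_paid=False))   # 0.5 - throttled
print(effective_rate(60, premium_paid=True))    # 8   - premium lifts the cap
```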

For the network to really become the computer, even for more mundane applications such as “Office apps”, telecoms has to deliver a service that’s at least as good as the one offered by my local backplane, with a detailed explanation of the metrics, the measuring points and the statistics - in other words, I want to know exactly what I’m getting for my money.

The only way that this can be delivered is for telecoms, not Google, to churn the To-The-Home (TTH) infrastructure. They have to wean themselves off the copper that has served them so well for decades and ideally bring fibre to the home, delivering Gigabit/second speeds, using Ethernet as the layer 1 and layer 2 technology (see previous blog, Eric Schmidt CEO of Google is wrong).

Until then, Mr Schmidt has his head in the clouds about network computing. Google is just an application; telecoms gets you to the application. The faster telecoms can do this, the more value they can add.