Is the Internet Ready to Break?

By Edward Cone  |  Posted 04-04-2007

Predictions of the Internet's imminent demise have been around almost as long as public interest in the net itself. But lately there's been something of a bull market in doom and gloom. Is it time to panic? Probably not.

According to a much-discussed report from the Technology, Media & Telecommunications (TMT) group at Deloitte Touche Tohmatsu, the rapid rise of Web video and broadband net access "may overwhelm some of the Internet's backbones" in 2007, while "ISPs may struggle to keep pace with demand." The report notes that daily traffic at the Amsterdam Internet Exchange, one of the largest hubs connecting the different IP networks that make up the net, will reach two petabytes by October, almost double its level of February 2006; traffic at the Amsterdam exchange (known as AMS-IX, or "Amsix") for all of 2007 is expected to reach one exabyte, "equivalent to 500 times the data stored in all U.S. research libraries."
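As a rough sanity check on the scale of those figures, the sketch below uses only the numbers quoted above; the annual total is a naive flat-rate extrapolation, not anything Deloitte published.

```python
# Back-of-envelope check of the Deloitte figures quoted above.
# The annual total assumes every day matched the projected October
# rate, which overstates a year in which traffic grows month to month.

PETABYTE = 1e15  # bytes
EXABYTE = 1e18   # bytes

daily_traffic_oct_2007 = 2 * PETABYTE        # projected daily AMS-IX traffic
naive_annual = daily_traffic_oct_2007 * 365  # flat-rate extrapolation

print(naive_annual / EXABYTE)  # 0.73 exabytes
```

That lands on the same order of magnitude as the report's one-exabyte annual figure, which suggests the headline number is rounded up or rests on growth assumptions beyond a flat daily rate.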

Meanwhile, PBS personality Mark Stephens, a.k.a. Robert X. Cringely, predicts that this will be remembered as the year "the net crashed (in the U.S.A.). Video overwhelms the net and we all learn that the broadband ISPs have been selling us something they can't really deliver." And the influential technology Web site Ars Technica asks whether service providers will have to "throttle" their networks to handle video-driven demand.

An Internet that is broken or seriously impaired at its core, disrupting the flow of information around the world, would obviously be bad for business in all kinds of ways. Problems with access to the net, which might not affect large businesses with dedicated access of their own, would be an issue for companies that rely on electronic commerce and other routine interchange with customers.

But as in the case of the most notorious prognostication of impending disaster, made in 1995 by Ethernet co-inventor Robert Metcalfe, the doom seers seem likely to eat their words. (Metcalfe famously did so quite literally, putting the column in which he made the prediction into a blender and consuming it in front of a conference audience.)

In fact, the supply of available bandwidth, especially at the core of the net, looks healthier than the pessimists would have it. Indeed, when pressed to defend their arguments, many don't bother to support them with hard numbers.

Deloitte TMT, which titled a key section of its report "Reaching the Limits of Cyberspace," was unwilling or unable to provide detailed backing for its claims. Said a Deloitte spokeswoman by e-mail, "We don't have more data that can be shared on questions re: Internet capacity."

Henk Steenman, chief technology officer at AMS-IX, does have more data. He states flatly that the key European hub "will definitely have no problem with capacity for 2007 or 2008. We've seen 100 percent increases in traffic each year since 1997, and coped with it. A hundred percent a year is nothing special, and I've seen no indications it will grow faster than that." Other hubs should be able to handle rapid growth, too, says Ken Cheng, vice president and general manager of the High Value Systems business unit at Foundry Networks Inc. in Santa Clara, Calif., which sells heavy-duty switching and routing equipment. "It's the same at Internet exchange points on every continent. I'm certain they will be able to handle the load," Cheng says.

Eric Schoonover, a senior analyst with Washington, D.C.-based market research firm TeleGeography, agrees. "There's nothing all that alarming going on," he says. "This whole idea that the increase in traffic is going to break something or kill something, or the providers won't keep up, seems foolhardy to me." Video traffic and demand growth have been accounted for, he says, and "the network operators know how to scale." TeleGeography research shows average global utilization of core Internet capacity in mid-2006 was only 34 percent, with peak utilization of 47 percent of available capacity.

And capacity at the core of the Internet continues to increase, says Google Inc. vice president Vint Cerf, a key figure in the development of the Internet. "There is available fiber and more wavelengths per fiber, so I do not see this as a serious threat," he says. It is true that traffic growth is faster than capacity growth—average traffic across the net increased 75 percent last year, while capacity grew 47 percent, according to TeleGeography—so the long-term trend needs to be addressed. But, says Cerf, a near-term capacity problem "will be at the access edges to the net, and not in the core." In other words, the traffic jams would be more likely at the points where people connect to the Internet via their service providers, rather than at the core of the net itself.
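TeleGeography's two growth rates can be combined into a back-of-envelope projection. The sketch below assumes, purely for illustration, that the 2006 rates continue unchanged, and asks how long the 34 percent average utilization would take to reach saturation.

```python
# Rough projection of core utilization if TeleGeography's 2006 rates
# (traffic +75%/yr, capacity +47%/yr) simply continued unchanged.
# Starting point: 34% average core utilization in mid-2006.

utilization = 0.34
traffic_growth = 1.75
capacity_growth = 1.47

years = 0
while utilization < 1.0:
    utilization *= traffic_growth / capacity_growth  # net ~19%/yr drift
    years += 1

print(years)  # 7 -- years until average utilization would hit 100%
```

At those rates the headroom lasts roughly seven years, which squares with Cerf's view: the long-term trend needs attention, but there is no 2007 crisis at the core.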

But even that "last mile" to homes looks reasonably healthy to the people in charge of it. "I don't see anything specific in the way of capacity problems today, and my job is to manage capacity and growth in our network," says Greg Collins, director of network and data center engineering for Earthlink Inc., the third-largest Internet service provider in the U.S.

Reports of severe problems at the core of the Internet and beyond in 2007 seem exaggerated. The more immediate impact of higher traffic on business users may be a flattening of the downward price curve they have so long enjoyed for telecom services—down nearly 60 percent over the last seven years—or even an uptick in cost. "My prediction is that in the industrialized world there may be increases in the price of bandwidth this year, and there will likely be increases in bandwidth costs for the next five years," says Tony Kern, deputy managing partner of Deloitte's U.S. TMT practice. (Kern did not work directly on the report predicting capacity issues.) TeleGeography's Schoonover is slightly more sanguine. "I don't think corporate users have a lot to worry about," he says. "Price decreases may end, but I don't see increases this year."

The State of the Internet

The Internet is a network of networks; at some point, generalizations about it break down. "We connect 260 different networks, some managed well and some managed less well," says Steenman. "Sometimes problems will be attributed to 'the Internet' that belong to a specific network." Connecting all these networks is an infrastructure of fiber-optic cables and major hubs such as AMS-IX; it is this core that Steenman and others regard as healthy into the near future.

Part of that confidence is bred by the design of the net itself, which was famously conceived to survive disaster by routing packets of information by the best available path; thus, overburdening a particular piece of the net doesn't clog the whole thing. This resiliency was demonstrated in the aftermath of the December 2006 earthquake in Taiwan, which knocked out of commission all but one of the eight submarine cables carrying telecom traffic from around the world to southern Asia. Internet service suffered, but did not go down completely, as traffic was rerouted across landlines and satellites.

Earthquakes aside, there is a lot of fiber-optic cable out there. The billions of dollars spent on laying so much of it helped pop the Internet bubble, and a lot of it stayed dark for years. But it's still there, and its carrying capacity won't be used up for quite a while. Part of that is due to improved technology: With huge increases in the ability to utilize the spectrum in recent years, some fiber is being used at only 1/100th of its potential capacity, according to TeleGeography's Schoonover. Given the ability to upgrade the performance of older fiber, in large part by updating the boxes—the broad range of hubs, routers, repeaters, lasers, multiplexers and so on—at either end of it, says Schoonover, "it's a stretch to say that even 5 percent of the potential capacity of long-haul fiber is being used."
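The economics Schoonover describes can be made concrete with some illustrative arithmetic. The channel counts and line rates below are hypothetical, chosen as typical of the era's dense wavelength-division multiplexing (DWDM) gear; they are not figures from the article.

```python
# Illustrative numbers only: why upgrading the boxes at either end of
# a fiber beats laying new glass. Channel counts and line rates are
# hypothetical, typical of the era's DWDM equipment.

wavelengths_lit = 4        # wavelengths in use on an older fiber
line_rate_gbps = 2.5       # per-wavelength rate of older transponders

upgraded_wavelengths = 80  # dense WDM after a terminal-equipment upgrade
upgraded_rate_gbps = 10    # faster transponders on the same glass

before = wavelengths_lit * line_rate_gbps          # 10 Gbps per fiber
after = upgraded_wavelengths * upgraded_rate_gbps  # 800 Gbps per fiber

print(after / before)  # 80x capacity without trenching new fiber
```

An 80-fold jump from swapping terminal equipment alone is the same order as Schoonover's claim that some fiber runs at 1/100th of its potential.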

That's not to say all this juiced-up fiber capacity is readily available. But the technology to enable it to handle projected near-to-mid-term growth is ready to deploy. Updating the hardware to utilize more of the spectrum and move to a new Ethernet standard won't be cheap. "The economics have to work out to replace opto-electronics to add capacity," says Cerf. But Steenman says AMS-IX, a not-for-profit, has the pricing power with its customers to support the investment it needs. In the longer term, the economics of upgrading are the subject of debate.

Says Schoonover: "If the demand is there, companies are going to be able to recover the upgrade costs in the sale of their capacity. We're talking about lighting new fiber, or adding wavelengths to existing lit fiber. Both are relatively cheap compared to the construction costs of putting in new fiber. [The burden of upgrading] is not the end of the world."

At the access layer, Earthlink's Collins says corporate IT, which tends to purchase its own direct connections to the Internet, should not see problems. For consumers, faster access for the last mile is on the way, with improved DSL and other broadband-service performance, and better image compression. Access providers for smaller and less thickly populated residential markets could feel a squeeze, he says, as they may lack the user density to make upgrades pay off in a reasonable period. Even there, given the number of popular applications available to consumers, says Collins, "migration from broadband to super-broadband will not take as long as the move from dial-up to broadband. We don't really control that last mile, the consumer does; where the consumer demands it, the market will deliver it. There's a challenge, but we're not standing around wringing our hands."

If streaming video becomes hugely popular, the potential for delays is real, says Cerf. But he's not convinced that consumers are in a hurry to start watching lots of streaming video in real time on their PCs and other devices, rather than downloading video to watch at their leisure. "I would point out that watching streaming video over the Internet is much less satisfying than downloading video and playing it back, as an iPod is for audio," he says. "Downloading is easier on the network. You don't have to assure every packet arrives precisely in order, and on time."

Building a Better Internet

If the sky isn't falling in 2007, traffic growth and overall capacity over time remain very real issues. For the next few years, the big Internet hubs can work with existing equipment, but upgrades must come relatively soon. Foundry's Cheng says the company will have 10-gig equipment (capable of moving information at 10 gigabits per second) that will "double the capacity of today's leading systems" ready to ship in May; he expects it to be live on the Internet by the third quarter of this year. He says 100-gig solutions will be on the market "way before 2010."

Says Steenman, speaking by phone from a meeting in Florida convened to work on an upgraded Ethernet standard: "We need new technology for the demand we expect in 2009, improved switches or next-generation 100-gigabit Ethernet." He's confident the new standard will be available when it's needed, but he acknowledges there could be problems if it isn't. "We're still in the study-group phase," he says. "The process needs to go a lot faster."

But raw speed and beefed-up carrying capacity aren't the only improvements under discussion. One solution lies in an overlay of distributed servers at the edge of different ISP networks, the method pioneered by Akamai Technologies Inc. Akamai has servers in more than 3,000 locations, in 750 cities, to boost speed and reliability over the last mile. These servers contain proprietary software that maps the net to optimize the flow of content and applications. "The Internet won't have the capacity to distribute new media in a traditional way from a central location," says Akamai chief scientist and cofounder Tom Leighton. "The Internet can be made to work, to put media into the home, and the Internet itself doesn't have to change at all." Along the same lines, Cringely has written that Google's massive data-center build-out could be part of a strategy to offer peering arrangements with ISPs, making Google "a huge proxy server for the Internet."

Another possible game-changer: a smarter Internet that differentiates between packets. "We need intelligence," says Earthlink's Collins. "Differentiated traffic has never been necessary before, and there are other technologies in the wings, such as broadband over power lines, that could become cost efficient with enough consumer demand. But intelligence is one way of meeting the looming concerns." The idea is that not all packets need to travel at the same speed to deliver optimal service.

"Some packets are more valuable than others," says Dave Caputo, chief executive of network equipment-maker Sandvine Inc. "Bandwidth has three dimensions: speed; latency, the delay before each packet arrives; and jitter, the variation in that delay from packet to packet. It doesn't take much bandwidth to have a good phone call. VoIP is not bandwidth intensive, but it is jitter- and latency-sensitive. Nobody cares about waiting an extra 200 milliseconds for e-mail, but that delay makes a phone call or game useless; interactive applications are more time-sensitive than non-interactive ones. A game player needs less jitter, but P2P traffic can move just a little slower without bothering people."
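The prioritization Caputo describes amounts to a priority queue at the network edge. Here is a minimal sketch; the application classes and priority values are hypothetical, chosen only to illustrate the ordering effect.

```python
# Minimal sketch of traffic differentiation: latency- and jitter-
# sensitive packets (VoIP, gaming) jump the queue ahead of bulk
# traffic (e-mail, P2P). Classes and priority values are hypothetical.

import heapq
from itertools import count

PRIORITY = {"voip": 0, "gaming": 0, "web": 1, "email": 2, "p2p": 3}

class Scheduler:
    def __init__(self):
        self._heap = []
        self._seq = count()  # FIFO tie-break within a priority class

    def enqueue(self, app, payload):
        heapq.heappush(self._heap, (PRIORITY[app], next(self._seq), app, payload))

    def dequeue(self):
        _, _, app, payload = heapq.heappop(self._heap)
        return app, payload

sched = Scheduler()
for app in ["p2p", "voip", "email", "gaming"]:
    sched.enqueue(app, b"...")

order = [sched.dequeue()[0] for _ in range(4)]
print(order)  # ['voip', 'gaming', 'email', 'p2p']
```

Real equipment marks packets (for instance, with DiffServ code points) rather than inspecting application names, but the effect is the same: jitter-sensitive traffic is served first, and bulk transfers absorb the delay.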

Prioritizing applications that are latency- and jitter-sensitive "will solve a lot of problems for enterprises and Internet users in the home," says Caputo. Without that kind of network intelligence, he foresees "a tragedy of the commons for the Internet, with bully applications taking more than their fair share, and less bandwidth-intensive apps like VoIP and gaming losing out."

Such differentiation doesn't have to impinge on "network neutrality," the hot-button issue of whether service providers should be able to charge different prices for similar services. (See "Scare Stories," page 40.) Cerf is a vocal proponent of network neutrality, which would keep service providers from prioritizing traffic from preferred users, e.g., those who pay more for enhanced service; but he is fine with prioritized traffic as long as it doesn't discriminate between providers of similar services.

With proper investment and better management, adds Sandvine's Caputo, "I feel very confident the Internet will outlive us all."