Bandwidth usage is soaring, driven by the proliferation of Internet-connected devices.
As the head of a bandwidth assessment group at the IEEE (Institute of Electrical and Electronics Engineers) and past chairman of the IEEE's task force on 40 Gigabit and 100 Gigabit per second Ethernet, John D'Ambrosia is among the people who will help guide the world toward 400 Gigabit and even Terabit per second speeds. But will our capacity to deliver bandwidth keep up with the human race's ability to consume it?
"That's the question that keeps me up at night," said D'Ambrosia, who is also chairman of the Ethernet Alliance industry group and an engineering executive at Dell. "When we were doing the 100 Gigabit project, people were saying as soon as you get 100 Gigabit done, you need to start working on the next speed. We're past that knee of the curve and we're getting into real exponential growth."
An estimated one-third of the world's population is online now, a proportion that is sure to grow. More users, more devices that connect to networks, and more data-heavy services to ride over the pipes are causing a “bandwidth explosion,” D’Ambrosia said. The data reviewed by his IEEE committees over the past few years indicates that bandwidth demand is growing faster than our capacity to deliver it.
But plenty of organizations are at work on the next generation of Internet and networking technologies, and they provide reason for optimism. The data explosion may not become a giant bottleneck thanks to continued research of the kind profiled below, which has already led to big advances in undersea cables, software-defined networking, and the research-oriented Internet2 network.
How much bandwidth do we need?
Some of the best numbers we have on bandwidth usage come from Cisco's Visual Networking Index, which shows that worldwide IP (Internet protocol) traffic hit 20.2 exabytes per month in 2010, or 242 exabytes for the full year.
An exabyte is, well, really huge, comprising 1,000 petabytes, while a single petabyte is 1,000 terabytes… and one terabyte is 1,000 gigabytes.
According to Cisco, global IP traffic increased eightfold over the five years leading up to 2010 and will quadruple by 2015, hitting 966 exabytes (nearly one zettabyte) for the full year. That will be the equivalent of all movies ever made crossing IP networks every four minutes.
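The Cisco figures quoted above are internally consistent, and it's worth making the implied growth rate explicit, since it reappears later in the article. A quick arithmetic check:

```python
# Sanity-check the Cisco Visual Networking Index figures quoted above.

monthly_2010 = 20.2                 # exabytes per month, 2010
annual_2010 = monthly_2010 * 12
print(f"2010 annual traffic: {annual_2010:.0f} EB")   # ~242 EB, as quoted

# "Quadruple by 2015" from the 2010 baseline implies a compound annual rate:
cagr = 4 ** (1 / 5) - 1
print(f"Implied growth rate: {cagr:.0%} per year")    # ~32% CAGR

annual_2015 = annual_2010 * 4
print(f"2015 annual traffic: {annual_2015:.0f} EB")   # ~970 EB, close to the 966 quoted
```

That 32 percent compound annual growth rate is the same figure Cisco cites directly in its forecasts.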
As more users enter the Internet age, the amount of data gobbled up by the busiest ones increases as well. By 2015, the top one percent of households worldwide are on pace to generate one terabyte of data each per month, four times the amount generated by the top one percent in 2010.
Lots of applications are driving this growth, but most notable is video. Video surpassed peer-to-peer file sharing as the largest type of Internet traffic in 2010. It's expected to account for more than 50 percent of consumer Internet traffic by sometime this year. By 2015, on-demand video traffic will be the equivalent of three billion DVDs per month, and one million minutes' worth of video will cross global IP networks every second.
It's not all for consumers, either; videoconferencing is “growing at pretty much the fastest rate from a traffic perspective, more than any other business application,” Thomas Barnett, a service provider marketing manager for Cisco, told Ars.
Of course, traffic to smartphones and tablets is also soaring (with carriers trying to restrict usage with monthly data caps). Cisco has found that mobile Internet devices (including laptops) are on the verge of outnumbering the people of Earth, reaching 10 billion by 2016.
Serving up the necessary bandwidth will be a challenge, of course, but it's a challenge that tech companies and research groups alike are racing to beat. One of the key technologies in this bandwidth arms race is also one of the oldest: underwater cables.
With the proliferation of mobile devices, it’s easy to think we’re living in an all-wireless world. But the haphazard jumble of cables in my house proves otherwise, and that’s only the tip of the iceberg when it comes to physical network infrastructure.
“So many people think the Internet is mobile, it’s wireless,” said Alan Mauldin, a research director at telecom market research firm TeleGeography. “Yeah, it’s wireless until it goes to the cell tower or to the WiFi base station. From there it's all physical. There are cables underground, cables in the ocean, that all link together to give us a global Internet. It’s really just the edges of the network where you’re able to see wireless and mobile technologies.”
Mauldin studies trends in undersea cables, and he has good news about the growth in capacity on this front. While the cables running under the world's oceans don't address the issue of bringing Internet capacity to far-flung regions, they're crucial for carrying traffic between countries and continents.
“We focus on undersea cables because that’s the primary way that international communications happen,” Mauldin said. “Satellites haven’t been a real big part of the picture for intercontinental connectivity in quite some time.”
As you can see in the chart below, international bandwidth availability has soared ("used bandwidth" refers to the capacity deployed by providers, rather than bandwidth consumed by end users). From 1.4 terabits per second in 2002, it steadily climbed to 6.7 terabits in 2006 and has now reached 92.1 terabits per second. TeleGeography expects that number to hit 606.6 terabits per second in 2018 and 1,103.3 terabits per second in 2020.
The terabits per second shown above represent the total international capacity in IP backbones, private networks, research and educational networks, etc. These numbers show the available capacity for data to travel from one country to another, both through undersea cables connecting nations separated by water and by the terrestrial links between countries with land borders. So links from New York to Washington, DC are not counted, while links from New York to Europe, and from one European country to another, are reflected in the data.
Regional connectivity numbers reveal huge disparities. While Europe in 2011 had 49.8 terabits per second of bandwidth available to flow between countries, and the US and Canada had 20.8 terabits per second, Africa had less than a terabit per second—700 gigabits. (These numbers, you may have noticed, add up to a higher total than the worldwide connectivity—that's because of some overlap. For example, trans-Atlantic capacity counts toward both the European and US/Canada totals.)
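The TeleGeography figures imply quite different growth rates in different periods. Assuming the 92.1 terabits per second "now" figure corresponds to 2011 (the year of the regional breakdown above), the implied compound annual rates can be computed directly; treat them as rough estimates given that assumption:

```python
# Implied compound annual growth of deployed international bandwidth,
# using the TeleGeography figures quoted above. The 92.1 Tbps "now"
# figure is assumed here to be the 2011 value (the year of the
# regional breakdown), so the exact rates are rough estimates.

def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

print(f"2002-2006: {cagr(1.4, 6.7, 4):.0%} per year")              # ~48%
print(f"2006-2011: {cagr(6.7, 92.1, 5):.0%} per year")             # ~69%
print(f"2011-2020 (forecast): {cagr(92.1, 1103.3, 9):.0%} per year")  # ~32%
```

Notably, even as the absolute numbers soar toward a petabit per second, the forecast growth *rate* slows to roughly the same 32 percent annual figure Cisco projects for traffic.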
More undersea cables are being built. Consider one $1.5 billion project to reduce latency between London and Tokyo by 60 milliseconds with what’s described as the “first ever trans-Arctic Ocean submarine fiber optic cables.”
Reducing latency is hugely important for certain applications, like those used in high-frequency stock trading. But that particular cable project, actually, isn’t crucial in the grand scheme of providing greater Internet access to more and more people, Mauldin believes.
“It’s not a huge issue, really, I don't think. Most of the capacity between Asia and Europe now can go across Russia terrestrially anyways, or it can go across the US between those two points,” he said. “And there's already high-capacity systems that serve and provide capacity between Europe and Asia as it is. There’s no lack of capacity.”
Several companies are looking at stretching cables across the Arctic, and this will benefit remote parts of Alaska and Canada, or even research stations near the North Pole, he said. New submarine cables are being deployed off the west coast of Africa, in the Middle East, South America, and from Singapore to Japan to meet regional demand.
Luckily, the cables under the ocean now don’t all need to be replaced in order to provide huge increases in bandwidth capacity.
Of course, construction of new cables won't stop. But cables that were designed to move data at 10 gigabits per second can now be upgraded to 40 gigabits, and perhaps even 100 gigabits, Mauldin said. Equipment has to be replaced on shore to get the speed boost, but crucially the undersea cables themselves can still be used.
“We're definitely seeing major advances in submarine cable technology that will allow existing cables that have been in service for a decade to have their capacity increased dramatically,” Mauldin said. “That’s one of the biggest changes we’ve seen in the past year or so.”
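The reason shore-side upgrades pay off so dramatically is that a cable's capacity is roughly the product of its fiber pairs, the wavelengths run on each pair, and the data rate per wavelength, and only the first of those is fixed in the wet plant. A rough model with purely illustrative numbers (not the specs of any real cable):

```python
# Rough model of submarine cable capacity: the fiber on the seabed is
# fixed, but the line-terminal equipment on shore determines how many
# wavelengths run on each fiber pair and at what rate. All figures
# below are illustrative, not the specs of any real cable.

def cable_capacity_gbps(fiber_pairs, wavelengths_per_pair, gbps_per_wavelength):
    return fiber_pairs * wavelengths_per_pair * gbps_per_wavelength

# A decade-old cable as originally lit: 4 fiber pairs, 32 wavelengths
# per pair at 10 Gbps each.
original = cable_capacity_gbps(4, 32, 10)

# The same fiber after a shore-side upgrade to denser wavelength
# spacing and 40 Gbps (or 100 Gbps coherent) channels.
upgraded_40 = cable_capacity_gbps(4, 64, 40)
upgraded_100 = cable_capacity_gbps(4, 64, 100)

print(f"As built: {original / 1000:.2f} Tbps")      # 1.28 Tbps
print(f"At 40G:   {upgraded_40 / 1000:.2f} Tbps")   # 10.24 Tbps
print(f"At 100G:  {upgraded_100 / 1000:.2f} Tbps")  # 25.60 Tbps
```

In this sketch the same decade-old fiber delivers twenty times its original capacity without a single ship leaving port, which is why Mauldin calls it one of the biggest changes of the past year.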
Software helps define networking
As Mauldin notes, pushing more data to more people isn't just about having more infrastructure. It's also about using it smarter. When it comes to using software to improve networking, few technologies are getting more hype these days than OpenFlow, an implementation of software-defined networking. Google already uses OpenFlow in its data centers. It's also being examined for the Worldwide LHC Computing Grid, the network that moves the massive amount of data produced by particle collisions at the Large Hadron Collider run by CERN, the European Organization for Nuclear Research.
“We’re able to use it at a small scale,” said Phil DeMar, network architect at Fermilab in Illinois. Fermilab is a “Tier 1” site on the LHC network, meaning it’s one of 11 research labs in the world that receive CERN data first. Data then moves to about 160 Tier 2 sites and on to many Tier 3 sites. OpenFlow makes the movement of scientific data more efficient by dynamically allocating network resources without slowing down the general purpose Internet traffic that ordinary users rely upon, DeMar said.
While OpenFlow helps move data from CERN to the Tier 1 sites, it hasn’t yet scaled across the entire LHC grid, DeMar said. OpenFlow alone won’t be enough to maximize network efficiency, but DeMar says it’s a good start.
“In terms of just getting bandwidth, it's a question of economics. How much can you afford to do?” DeMar said. Fermilab has two 10Gbps connections to CERN, and another two connecting Fermilab to Tier 2 sites. But it turns out “it’s a bigger challenge to be able to move data at that rate using the layers of software that have to exist than it is to have to provision 10 to 20 or 30 gigabits, whatever you need, underneath it,” he said. “It’s more a challenge of software, and middleware, actually.”
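The idea DeMar describes, steering bulk science transfers onto dedicated resources without starving everyday traffic, boils down to a controller installing prioritized match/action rules in the switches' flow tables. A toy sketch of that concept (this is a conceptual illustration, not the actual OpenFlow wire protocol or the API of any real controller, and the network names and port are hypothetical):

```python
# Toy sketch of the software-defined-networking idea behind OpenFlow:
# a controller installs prioritized match/action rules, and the switch
# applies the highest-priority rule that matches each packet. This is
# a conceptual illustration only, not the real OpenFlow protocol.

# Each rule: (priority, match predicate, action). Names are hypothetical.
flow_table = [
    # Steer bulk LHC data transfers onto a dedicated high-bandwidth
    # path so they don't crowd ordinary users.
    (200,
     lambda pkt: pkt["dst_net"] == "lhc-storage" and pkt["port"] == 2811,
     "forward:science-path"),
    # Everything else takes the general-purpose route.
    (0, lambda pkt: True, "forward:default-path"),
]

def handle(pkt):
    """Apply the highest-priority matching rule, as a switch would."""
    for _, match, action in sorted(flow_table, key=lambda r: r[0], reverse=True):
        if match(pkt):
            return action

print(handle({"dst_net": "lhc-storage", "port": 2811}))  # forward:science-path
print(handle({"dst_net": "campus-web", "port": 443}))    # forward:default-path
```

The point of putting this logic in software is that the controller can add, remove, or reprioritize rules on the fly, which is what "dynamically allocating network resources" means in practice.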
The second Internet
OK, there is no "second Internet," but there is an "Internet2." This is a networking consortium composed of hundreds of universities, government agencies, labs, and research and education networks. Internet2 has been building out its network infrastructure since 1998. Now in its fourth iteration, Internet2 boasts "the first transcontinental 100 Gigabit per second network in the world."
Internet2's goal is to be roughly one generation ahead of what's available in commercial Internet networks, said Rob Vietzke, the consortium’s VP of network services.
Vietzke thinks OpenFlow will be critical for the next generation of the Internet because, he says, the technology lets techies program and configure the network in the same way they can program any other piece of hardware. "In every discipline of computer science right now, except for networking, you can program your hardware," he said.
There are still scientists shipping disk drives across the country for lack of high-speed network access, Vietzke said. In a future where "bandwidth is no longer a restriction or a constraint," Vietzke hopes to see all kinds of innovations—perhaps entirely new security models, or newfangled visualizations of scientific data drawing from distributed databases.
The key question, of course, is whether these bandwidth innovations—along with many others—can exceed demand over the next decade.
Andrew Odlyzko started tracking Internet traffic in 1997 at AT&T Labs and has continued as a professor at the University of Minnesota, where he set up the Minnesota Internet Traffic Studies (MINTS) project.
Odlyzko is wary of making predictions about bandwidth because, he says, “I sort of got burned back in the late '90s” when he forecast a doubling of traffic every year and growth ended up slowing. Still, he wouldn’t be surprised if Cisco’s projections turn out to be too low—Cisco is predicting 32 percent compound annual growth in total worldwide IP traffic through 2015.
“I’m skeptical because when I look at the growth of computing power and in storage, those are still doubling each year,” Odlyzko said. (Moore’s law predicts a doubling every 18 months to two years.) “I can see potential sources of such traffic.”
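Odlyzko's comparison can be made precise: a quantity that doubles every T years grows at an annual rate of 2^(1/T) − 1, so anything on an annual or even Moore's-law doubling schedule outpaces Cisco's 32 percent forecast.

```python
# Annual growth rate implied by a given doubling time, in years.
def annual_rate(doubling_time_years):
    return 2 ** (1 / doubling_time_years) - 1

for t in (1.0, 1.5, 2.0):
    print(f"Doubling every {t} years -> {annual_rate(t):.0%} per year")

# Doubling every 1.0 years -> 100% per year
# Doubling every 1.5 years -> 59% per year
# Doubling every 2.0 years -> 41% per year
```

All three rates sit comfortably above the forecast 32 percent annual traffic growth, which is why growth in computing and storage makes Odlyzko suspect Cisco's projection may be too low.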
But he also sees plenty of unused current capacity, although it’s hard to measure. That’s in part because of “dark fiber,” cables that have been installed but not yet “lit” or activated by a network provider.
Everyone we spoke with had real reasons for optimism about the ability of all these solutions to collectively provide the bandwidth we need. “Networking technology seems to be staying ahead of the requirements," said Fermilab's DeMar. "Certainly the networking requirements for large-scale science are increasing, but similarly we’re getting an evolution in network technologies.”
TeleGeography's Mauldin notes that even current technology should be able to keep up with undersea cable demand for the next few years.
“The same 10 Gigabit technology that was developed a while back has served us well,” Mauldin said. “Just to be clear: the cables in service now, they are nowhere near having their capacity exhausted. It’s not like there's a shortage happening. It’s just that going to 40 and 100 gigabits is going to be more favorable, it’s going to help meet demand in the future and it will also help to lower costs. That’s the whole key here as to why bandwidth demand is able to keep soaring. The cost of bandwidth on a per-unit basis keeps going down every year.”
And John D'Ambrosia is looking even further down the road. While Gigabit Ethernet is still widely used, 10 Gigabit and faster products are starting to make headway. Even faster 40 and 100 Gigabit Ethernet standards were ratified by the IEEE in June 2010, and D'Ambrosia is already relishing the technical debate that will take place around moving forward to 400 Gigabit Ethernet or even Terabit Ethernet.
“If we look at this from a technology perspective, you will have a lot of people pointing to 400 Gigabit, because there are ways of making a solution that are believed to be in our reach," he said. "When you start talking about Terabit, it’s not as clean. There are very wide interfaces, both electrically and optically, that for Ethernet links are going to be problematic. I think that’s going to be an industry debate.”
But that's the sort of debate he enjoys having—and for the last few decades, the engineers who engage in such work have kept ahead of the looming bandwidth monster. Here's hoping that all their innovations give us another decade of big bandwidth. Now if we could just get more of that core capacity to home and business users at the network's edge...