Latency is the new bandwidth. It’s the scarce resource when you need really, really high performance in your computing environment. Who needs such performance? The big driver has been financial companies that trade on the stock and commodity exchanges. That’s why Chicago, as a major financial center, is being targeted for the AboveNet buildout. New York is another hotbed of activity, with colocation near financial exchanges in high demand. Since we are in a global trading economy, low latency connections to Europe and Asia are also enjoying enormous growth.
Why the need for speed, and why doesn’t more bandwidth solve this problem? Latency and bandwidth are two different animals. Latency is how long it takes a packet to get from point A to point B if there is no interference from other traffic. Bandwidth is the sheer volume of traffic you can handle before packets start piling up at the choke points.
To someone with an undersized WAN, latency and bandwidth can look like the same thing. That's because a bandwidth-limited link slows transmission far more than latency does. But once you have enough bandwidth to carry the traffic you actually send and receive, the network is going as fast as it can. Adding more bandwidth won't make those packets fly down the line any faster. They're inherently limited by the speed of light and by electrical delays within the circuitry.
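The distinction can be sketched with a toy calculation. In this simple model (not from the article; all figures are illustrative assumptions), delivery time is one-way latency plus the time to push the bits onto the wire. Ten times the bandwidth collapses the big transfer but barely touches the small message:

```python
def transfer_time(size_bits, bandwidth_bps, latency_s):
    """Time to deliver a message: one-way latency plus serialization time."""
    return latency_s + size_bits / bandwidth_bps

latency = 0.040             # assumed 40 ms one-way latency
small_msg = 8_000           # a 1 KB trading message, in bits
large_file = 8_000_000_000  # a 1 GB backup, in bits

for bw in (10e6, 100e6):    # 10 Mbps vs 100 Mbps
    print(f"{bw/1e6:.0f} Mbps: "
          f"small message {transfer_time(small_msg, bw, latency)*1000:.1f} ms, "
          f"large file {transfer_time(large_file, bw, latency):.1f} s")
```

At 10 Mbps the small message takes 40.8 ms and the file 800 seconds; at 100 Mbps the file drops to 80 seconds, but the small message only improves to 40.1 ms, because it is dominated by latency rather than bandwidth.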
You get a dramatic feel for the effect of latency when you watch those television interviews from locations thousands of miles away. They connect to the studio using microwave trucks that beam the signal to a geostationary satellite 22,236 miles above the equator. The signal goes from the truck 22,236 miles up to the satellite and then comes back down 22,236 miles to the television studio. That's roughly a quarter of a second of delay one way, or about half a second round trip for a conversation, at a minimum. You find it either comical or annoying to listen to the reporters at both locations tripping over each other.
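The satellite figures are easy to verify: at the speed of light, each 22,236-mile leg takes about 119 ms, so a one-way hop through the satellite is the quarter second mentioned above, and a question-and-answer exchange doubles it:

```python
C_MILES_PER_S = 186_282    # speed of light in vacuum, miles per second
ALTITUDE_MILES = 22_236    # geostationary orbit altitude above the equator

one_leg = ALTITUDE_MILES / C_MILES_PER_S   # truck -> satellite (or satellite -> studio)
one_way = 2 * one_leg                      # truck -> satellite -> studio
round_trip = 2 * one_way                   # question goes out, answer comes back

print(f"one leg:    {one_leg*1000:.0f} ms")     # ~119 ms
print(f"one way:    {one_way*1000:.0f} ms")     # ~239 ms, the 'quarter second'
print(f"round trip: {round_trip*1000:.0f} ms")  # ~477 ms, the 'half second'
```

And that is pure propagation delay; encoding, switching, and studio equipment only add to it.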
How do you reduce that latency? Take as much equipment out of the path as possible and make the path as close to a straight line as possible. There's not much you can do about a geosynchronous satellite; it only works at that altitude. Just don't try to use one for VoIP telephony or any sort of interactive process, or you'll get frustrated quickly. Fiber optic cables are the high-bandwidth connection of choice, but even they are not created equal. Some networks, like the Internet, may take a circuitous route to get packets from one location to another. The Internet was designed to get packets delivered even under multiple fault conditions. It wasn't designed to get them delivered particularly fast.
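Even over fiber, distance sets a hard floor. Light in silica fiber travels at roughly 1/1.47 of its vacuum speed, which works out to about 4.9 microseconds per kilometer. As a rough sketch (the refractive index is a typical assumed value, and the Chicago–New York distance below is an illustrative straight-line estimate, not a real route), the best case between those two cities is a few milliseconds each way, and every extra kilometer of detour adds to it:

```python
C_KM_PER_S = 299_792       # speed of light in vacuum, km per second
REFRACTIVE_INDEX = 1.47    # typical assumed value for silica fiber

v_fiber = C_KM_PER_S / REFRACTIVE_INDEX   # ~204,000 km/s in the glass
per_km_us = 1e6 / v_fiber                 # propagation delay per kilometer

chicago_ny_km = 1_150      # approximate straight-line distance (assumption)
one_way_ms = chicago_ny_km * per_km_us / 1000

print(f"fiber delay: {per_km_us:.1f} us per km")
print(f"Chicago-New York one way, straight line: {one_way_ms:.1f} ms")
```

Real routes are longer than the straight line and pass through switching gear, which is exactly why purpose-built low-latency networks obsess over route miles and hop counts.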
What’s better? Privately run fiber optic networks, like national and international MPLS networks, do a decent job. Even these are designed for normal business requirements and aren’t optimized for minimal latency. What you need are networks specifically designed to minimize latency. They feature very straight runs from location to location, very few switches or routers on the path, and termination as close to the users as possible.
High frequency trading has highlighted the need for low latency networks, but as business moves more and more into the cloud, other processes will drive their own needs for this level of performance. Disk mirroring and data replication are two applications that already benefit from lower latency connections. Cloud computing over multiple locations could easily have the same requirement.
Are your business processes latency sensitive? If so, you should seriously consider the newer low latency network services offered by AboveNet and other competitive carriers. There are microseconds, even milliseconds to be saved.