
Monday, February 09, 2015

The Cloud Needs Low Latency

By: John Shepler

It makes so much economic sense. Instead of getting a bank loan and making a major capital investment in your own server farm and data center, you simply contract with a cloud service provider and pay only for what you use when you use it. No more having to predict how business will be next year or even next quarter. You can roll with the punches with on-demand cloud services.

It Seems So Easy
The solution is deceptively simple. Clean out your current data center or don’t build one in the first place. Rent what you need from one of a myriad of cloud service providers. All you need is a simple link from your facilities to theirs and nobody on the network will know the difference. After all, how can they tell whether the servers are down the hall or across the country? Few users do anything but run the applications anyway.

The Hesitating Network
There’s an old saying that goes something like this: “On the network, the printer on the other side of the country is as close as the one in the next office.”

That’s an ideal. In reality, it can either seem just that close or really a thousand miles away. The server three states over works the same as the one in the basement used to. Somehow, though, it’s developed a hesitation. You have to wait for the system to respond now. Worse yet, that hesitation seems to vary. You really don’t know how the system is going to perform from one day to the next.

The Infinite Cloud That Isn’t
One used to be able to blame system performance lags on lack of resources. When everyone is running big jobs at the same time things slow down a bit. If you were clever, you tried to get your work done at odd hours before the thundering herd got to work or when they went to lunch. Let somebody else deal with the congestion. It’s the equivalent of drive-time traffic and no more fun.

That was one thing the cloud was supposed to fix. A major benefit of cloud data centers is that they have massive resources. Any given tenant can expand or contract the number of servers and the amount of storage they are using in real time. You aren’t supposed to run out of capacity ever. Yet, it seems like the cloud has less capability than you had before. How can that be possible?

The WAN Gotcha
One big thing that is often forgotten is that the printer in the next office and the one across the country have to be connected by something. That something is your WAN network connection. When all of your network resources were sitting on the LAN, the point was moot. There was plenty of capacity and the links were short enough that network performance just wasn’t an issue. You might say that the network became “transparent.”

Now, your network includes both the sporty LAN and metro and wide area connections that can be anything but sporty. In fact, they can be downright sluggish. That’s because there’s a difference between your local networks and the ones run by telecom carriers. Local networks can be fairly easily engineered to have enough performance to appear transparent. That’s both harder and more expensive to do over long distances.

What’s Holding Back the WAN?
There are a few technical characteristics that spell the difference between transparent and not so transparent connections. These include bandwidth, latency, jitter and packet loss. You want to maximize the first and minimize the other three.

It’s tempting to jump to the conclusion that sluggish network performance is a result of too little capacity. When you start to use most of the available bandwidth most of the time, you can have periods of congestion where there are just too many packets to send down the line at the same time. Easy solution? Increase bandwidth. A double, triple or 10x bandwidth increase should solve the problem immediately… or will it?

One issue that no bandwidth increase can fix is latency: the time it takes packets to traverse the network. We think of electrical transmission as being instantaneous from source to destination. At the local level, that’s a really good assumption. Once you leave the premises, though, even the speed of light may not be fast enough.
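If you want to see this for yourself, here’s a minimal sketch that estimates round-trip latency by timing a TCP handshake. The hostname is a placeholder, not a real endpoint recommendation; substitute your own cloud server.

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443) -> float:
    """Estimate round-trip latency by timing a TCP handshake."""
    start = time.perf_counter()
    # Opening a TCP connection costs roughly one network round trip.
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000.0

# 'example.com' is a placeholder; point this at your own cloud endpoint.
print(f"Approximate RTT: {tcp_rtt_ms('example.com'):.1f} ms")
```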

Minimizing Latency
Clearly, one way to minimize the time delay of transmission is to minimize the distance involved. Even at light speed, you can only go 186 miles in a millisecond, or 1,860 miles in 10 milliseconds. In fact, even that isn’t really possible. Light in real-world glass fiber travels at roughly two-thirds of that speed, so you may be looking at more like 15 to 20 milliseconds one way, or 30 to 40 milliseconds round trip. If you happen to be using a geosynchronous satellite link for part of the trip, that expands to a quarter of a second one way, or half a second round trip.
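To put rough numbers on it, here’s a back-of-the-envelope sketch of propagation delay alone, assuming light in glass fiber moves at about two-thirds of its vacuum speed. Real routes add switching and queuing delays on top, so treat these figures as floors, not estimates.

```python
SPEED_OF_LIGHT_MILES_PER_MS = 186.0  # ~186,000 miles per second in a vacuum
FIBER_VELOCITY_FACTOR = 0.67         # assumption: light in glass travels at ~2/3 c

def one_way_delay_ms(route_miles: float,
                     velocity_factor: float = FIBER_VELOCITY_FACTOR) -> float:
    """Propagation delay only; ignores switching and queuing delays."""
    return route_miles / (SPEED_OF_LIGHT_MILES_PER_MS * velocity_factor)

# A 1,860 mile fiber run (real fiber paths are rarely straight lines):
print(f"Fiber, 1,860 mi: {one_way_delay_ms(1860):.0f} ms one way")

# A geosynchronous satellite hop: ~22,236 mi up plus ~22,236 mi down, at c.
print(f"Satellite hop: {one_way_delay_ms(2 * 22236, velocity_factor=1.0):.0f} ms one way")
```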

Most applications aren’t going to be impacted by a 40+ millisecond delay, although half a second is definitely going to be noticeable and perhaps difficult to live with. This argues for using a terrestrial fiber link to the cloud (which is actually on the ground) and taking the shortest path possible. A point-to-point dedicated private line is likely the best performance you can achieve. You may be able to get equivalent or nearly equivalent performance through a privately run MPLS network at a lower cost.

How About the Internet?
Oh, yes, the Internet. It sounds like the ideal resource. It goes everywhere and connects everyone. The cost is amortized over so many users that the Internet is going to be your low cost solution. Security is certainly an issue, but encryption can create tunnels that give you a virtually private network that emulates a truly private network.

That emulation may fall short, not in security, but in performance. The Internet is not engineered to minimize latency or jitter. Its architecture is designed for resilience. If you lose a connection between servers, the network will automatically reconfigure to heal itself and keep the traffic moving. Unfortunately, that means your packets may take different paths on different trips. They may also encounter bottlenecks if particular nodes get overloaded.
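That path variability shows up as jitter you can measure. Here’s a minimal sketch that samples handshake round trips and reports their spread; the standard deviation stands in as a rough proxy for jitter (RFC 3550 defines a more careful smoothed estimate), and the hostname is again a placeholder.

```python
import socket
import statistics
import time

def sample_rtts_ms(host: str, port: int = 443, samples: int = 10) -> list[float]:
    """Time repeated TCP handshakes to sample round-trip latency."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        rtts.append((time.perf_counter() - start) * 1000.0)
        time.sleep(0.2)  # space the probes out a little
    return rtts

rtts = sample_rtts_ms("example.com")  # placeholder host
print(f"Mean RTT: {statistics.mean(rtts):.1f} ms")
print(f"Jitter (std dev): {statistics.stdev(rtts):.1f} ms")
```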

Does this mean you can’t use the Internet to connect to the cloud? For maximum performance or highly sensitive interactive and real-time applications, you’ll do better with private lines. Otherwise, you might be satisfied with a private/public hybrid called DIA or Dedicated Internet Access. You still share the high performance backbone of the Internet core, but you connect via a private line that minimizes latency, jitter and packet loss during that critical first mile.

Finding a Better Cloud Connection
There is a range of cloud connectivity options available for most business locations. You should look at the cost/performance tradeoffs involved with each of these and then choose the link you need to make your connection to the cloud as transparent as you need.

Click to check pricing and features or get support from a Telarus product specialist.




Friday, March 09, 2012

Transforming the WAN Into The Smart WAN

We often think of WAN connections as dumb pipes. Perhaps even the “series of tubes” that Senator Ted Stevens envisioned as a model for the Internet. Data goes in one end and comes out the other. Does it need to be any more sophisticated than that?

These days it truly does. The move from predominantly in-house data centers and simple file transfers over the LAN to cloud-hosted systems, VoIP telephony and video content has changed what we need from the WAN. Level 3 calls this the Smart WAN. This short animated video illustrates the difference...

[Embedded video: Level 3’s Smart WAN animation]

Clearly, if you’re sitting in line at the bank drive-through, watching people load their transactions into those plastic carriers and shoot them through the vacuum tubes to the tellers, and you start thinking, “Hmmm. That’s not a bad model for my WAN network,” you could be in serious need of a Smart WAN upgrade.

It all starts with a smart pipe to replace dumb pipe technology. Instead of just cramming everything down the same conduit without regard to content, you need to put some method into the madness. That means CoS, or Class of Service. CoS recognizes that some packets are more sensitive than others to the vagaries of the network. Most sensitive is anything operating two-way in real time. VoIP telephony will fail when it encounters even the least blockage or delay in the WAN. Video conferencing has a similar sensitivity to network characteristics.

What CoS does is assign different classes or priorities to different packet streams. Brute force file transfers go to the bottom of the list. It is important that the files get from point A to point B intact, but a little delay here and there won’t make any difference. The same is true for email and most messaging. Interactive business applications are a step higher. If you are interacting with SaaS in the cloud and expect it to be as responsive as if it were running on the servers down the hall, your network has to make these interactions transparent. Bandwidth constrictions and latency can raise employee frustrations to a fever pitch, if not kill productivity completely.
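In IP networks, these class priorities are typically carried as DSCP markings in each packet’s header. Here’s a minimal sketch of tagging outbound voice packets with the Expedited Forwarding class. The destination address is a documentation-range placeholder, and the marking only helps if the routers along the path are configured to honor it.

```python
import socket

# DSCP Expedited Forwarding (EF, value 46) is the class commonly used for
# voice. The IP TOS byte carries the DSCP value in its upper six bits.
DSCP_EF = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# IP_TOS is available on POSIX systems.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)

# Datagrams sent on this socket now carry the EF marking, which CoS-aware
# routers can queue ahead of bulk file-transfer traffic.
sock.sendto(b"voice payload", ("192.0.2.10", 5004))  # placeholder address
```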

Enterprise VoIP on converged IP networks offers major cost reductions and added productivity features for businesses that can make it work. The Hosted PBX model adds another layer of sensitivity because the WAN gets involved in handling voice traffic as well as data. Latency, jitter and packet loss wreak havoc with voice communications. Just because convergence works on your LAN doesn’t mean that sending the same traffic down a dumb pipe will give an equally good result.
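As a rough sanity check on whether a WAN link is fit for voice, commonly cited rules of thumb are one-way delay of 150 ms or less (per ITU-T G.114), jitter under about 30 ms, and packet loss under about 1%. A trivial sketch, treating those thresholds as assumptions rather than hard limits:

```python
def voip_link_ok(one_way_ms: float, jitter_ms: float, loss_pct: float) -> bool:
    """Check a link against commonly cited thresholds for toll-quality VoIP."""
    return one_way_ms <= 150 and jitter_ms < 30 and loss_pct < 1.0

print(voip_link_ok(one_way_ms=40, jitter_ms=5, loss_pct=0.1))    # True
print(voip_link_ok(one_way_ms=250, jitter_ms=60, loss_pct=2.0))  # False: satellite-like path
```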

Now consider having the same set of services expecting the same quality at multiple business locations, including headquarters, branch offices, retail locations, warehouses and factories. Clearly, some intelligence needs to be injected into the WAN to ensure that every location has the same connectivity and quality of service as every other.

Has your company grown beyond the capability of simple, dumb WAN connections to ensure quality and reliability of service? If so, it’s time to consider a Smart WAN upgrade for your network operations. You may even find that this improved performance comes at equal or lower cost than what you have now.

Click to check pricing and features or get support from a Telarus product specialist.





Tuesday, November 23, 2010

Even Lower Latency Connections To Chicago

If you’ve been wondering just how important network latency has become, note that AboveNet is installing shorter fiber optic routes just to link Chicago and its suburbs. That network will be in operation by mid-next year. Can other cities be far behind?

Latency is the new bandwidth. It’s the scarce resource when you need really, really high performance in your computing environment. Who needs such performance? The big driver has been financial companies that trade on the stock and commodity exchanges. That’s why Chicago, as a major financial center, is being targeted for the AboveNet buildout. New York is another hotbed of activity, with colocation near financial exchanges in high demand. Since we are in a global trading economy, low latency connections to Europe and Asia are also enjoying enormous growth.

Why the need for speed, and why doesn’t more bandwidth solve this problem? Latency and bandwidth are two different animals. Latency is how long it takes a packet to get from point A to point B if there is no interference from other traffic. Bandwidth is the sheer volume of traffic you can handle before packets start piling up at the choke points.

To someone with a too-small WAN network, it might seem that latency and bandwidth are the same thing. That’s because a bandwidth-limited system slows down transmission far more than latency does. But once you have enough bandwidth to handle your actual traffic volume, the network is going as fast as it can. Increasing bandwidth won’t make those packets fly down the line any faster. They’re inherently limited by the speed of light and electrical delays within the circuitry.
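A first-order model makes the distinction concrete: total transfer time is roughly one round trip of latency plus the time to serialize the data onto the line. A sketch, assuming a 1 MB transfer over a path with a 40 ms round trip:

```python
def transfer_time_ms(size_mb: float, bandwidth_mbps: float, rtt_ms: float) -> float:
    """First-order model: one round trip plus serialization time."""
    return rtt_ms + (size_mb * 8 / bandwidth_mbps) * 1000.0

# The same 1 MB transfer at three bandwidth tiers, all with 40 ms RTT:
for mbps in (10, 100, 1000):
    print(f"{mbps:>4} Mbps: {transfer_time_ms(1, mbps, 40):.0f} ms")

# Output: 840 ms, 120 ms, 48 ms. Each 10x bandwidth increase shrinks the
# serialization term, but the 40 ms latency floor never goes away.
```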

You get a dramatic feel for the effect of latency when you watch those television interviews from locations thousands of miles away. They connect to the studio using microwave trucks that beam the signal to a geostationary satellite 22,236 miles above the equator. The signal goes from the truck 22,236 miles up to the satellite and then comes back down 22,236 miles to the television studio. That’s a quarter of a second delay one way, or about half a second minimum for a conversational exchange. You find it either comical or annoying to listen to the reporters at both locations tripping over each other.

How do you reduce that latency? Take as much equipment out of the path as possible and make the path as straight a line as possible. There’s not much you can do with a geosynchronous satellite; it only works at that altitude. Just don’t try to use one for VoIP telephony or any sort of interactive process or you’ll get frustrated quickly. Fiber optic cables are the high bandwidth connection of choice, but even they are not created equal. Some networks, like the Internet, may take a circuitous route to get packets from one location to another. The Internet was designed to get the packets delivered even under multiple fault conditions. It wasn’t designed to get them delivered particularly fast.

What’s better? Privately run fiber optic networks, like national and international MPLS networks, do a decent job. Even these are designed for normal business requirements and aren’t optimized for minimal latency. What you need are networks specifically designed to minimize latency. They feature very straight runs from location to location, very few switches or routers on the path, and termination as close to the users as possible.

High frequency trading has highlighted the need for low latency networks, but as business moves more and more into the cloud, other processes will drive their own needs for this level of performance. Disk mirroring and data replication are two applications that already benefit from lower latency connections. Cloud computing over multiple locations could easily have the same requirement.

Are your business processes latency sensitive? If so, you should seriously consider the newer low latency network services offered by AboveNet and other competitive carriers. There are microseconds, even milliseconds to be saved.

Click to check pricing and features or get support from a Telarus product specialist.



