Monday, February 09, 2015

The Cloud Needs Low Latency

By: John Shepler

It makes so much economic sense. Instead of getting a bank loan and making a major capital investment in your own server farm and data center, you simply contract with a cloud service provider and pay only for what you use when you use it. No more having to predict how business will be next year or even next quarter. You can roll with the punches with on-demand cloud services.

It Seems So Easy
The solution is deceptively simple. Clean out your current data center or don’t build one in the first place. Rent what you need from one of a myriad of cloud service providers. All you need is a simple link from your facilities to theirs and nobody on the network will know the difference. After all, how can they tell whether the servers are down the hall or across the country? Few users do anything but run the applications anyway.

The Hesitating Network
There’s an old saying that goes something like this: “On the network, the printer on the other side of the country is as close as the one in the next office.”

That’s an ideal. In reality, the remote resource can seem just that close, or it can feel a thousand miles away. The server three states over works the same as the one in the basement used to. Somehow, though, it has developed a hesitation. You have to wait for the system to respond now. Worse yet, that hesitation seems to vary. You never really know how the system is going to perform from one day to the next.

The Infinite Cloud That Isn’t
One used to be able to blame system performance lags on lack of resources. When everyone is running big jobs at the same time things slow down a bit. If you were clever, you tried to get your work done at odd hours before the thundering herd got to work or when they went to lunch. Let somebody else deal with the congestion. It’s the equivalent of drive-time traffic and no more fun.

That was one thing the cloud was supposed to fix. A major benefit of cloud data centers is that they have massive resources. Any given tenant can expand or contract the number of servers and the amount of storage they are using in real time. You aren’t supposed to run out of capacity ever. Yet, it seems like the cloud has less capability than you had before. How can that be possible?

The WAN Gotcha
One big thing that is often forgotten is that the printer in the next office and the one across the country have to be connected by something. That something is your WAN connection. When all of your network resources were sitting on the LAN, the point was moot. There was plenty of capacity and the links were short enough that network performance just wasn’t an issue. You might say that the network became “transparent.”

Now, your network includes both the sporty LAN and metro and wide area connections that can be anything but sporty. In fact, they can be downright sluggish. That’s because there’s a difference between your local networks and the ones run by telecom carriers. Local networks can be fairly easily engineered to have enough performance to appear transparent. That’s both harder and more expensive to do over long distances.

What’s Holding Back the WAN?
There are a few technical characteristics that spell the difference between transparent and not so transparent connections. These include bandwidth, latency, jitter and packet loss. You want to maximize the first and minimize the other three.
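These four characteristics are easy to estimate from simple round-trip probes, such as ping. As a rough sketch (the RTT samples below are invented for illustration): latency is the average round-trip time, jitter can be taken as the mean difference between consecutive round trips, and packet loss is the fraction of probes that never come back.

```python
# Hypothetical round-trip times in milliseconds; None marks a lost probe.
# (All sample values are made up for illustration.)
samples = [42.1, 43.0, 41.8, None, 55.6, 42.4, None, 43.2]

received = [s for s in samples if s is not None]

# Packet loss: fraction of probes that never returned.
loss_pct = 100.0 * (len(samples) - len(received)) / len(samples)

# Latency: average round-trip time of the probes that made it.
avg_latency = sum(received) / len(received)

# Jitter: mean absolute difference between consecutive RTTs.
diffs = [abs(b - a) for a, b in zip(received, received[1:])]
jitter = sum(diffs) / len(diffs)

print(f"loss: {loss_pct:.1f}%  latency: {avg_latency:.1f} ms  jitter: {jitter:.1f} ms")
```

In practice you would collect the samples with ping or a monitoring tool; these are the same numbers carriers quote in their service level agreements.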

It’s tempting to jump to the conclusion that sluggish network performance is a result of too little capacity. When you start to use most of the available bandwidth most of the time, you can have periods of congestion where there are just too many packets to send down the line at the same time. Easy solution? Increase bandwidth. A double, triple or 10x bandwidth increase should solve the problem immediately… or will it?

One issue that no bandwidth increase can fix is latency, the time delay for packets to traverse the network. We think of electrical transmission as being instantaneous from source to destination. At the local level, that’s a reasonable assumption. Once you leave the premises, though, even the speed of light may not be fast enough.

Minimizing Latency
Clearly, one way to minimize the time delay of transmission is to minimize the distance involved. Even at light speed, you can only go about 186 miles in a millisecond. That’s 10 milliseconds for 1,860 miles. In fact, even that isn’t really achievable. Light travels at roughly two-thirds of its vacuum speed in glass fiber, so in the real world you may be looking at more like 15 to 20 milliseconds one way, or 30 to 40 milliseconds round trip. If you happen to be using a geosynchronous satellite link for part of the trip, that expands to a quarter to half a second.
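That back-of-the-envelope math is easy to check. A minimal sketch, assuming a refractive index of 1.47 for glass fiber (a typical value) and counting propagation delay only, with no routing or queuing overhead:

```python
# Propagation-delay-only latency estimate for a fiber run.
C_MILES_PER_MS = 186.282   # speed of light in vacuum, ~186 miles per millisecond
FIBER_INDEX = 1.47         # assumed refractive index; light in fiber runs ~c/1.47

def one_way_delay_ms(miles, index=FIBER_INDEX):
    """Time for light to cover `miles` of fiber, ignoring equipment delays."""
    return miles * index / C_MILES_PER_MS

# The 1,860-mile example from the text:
d = one_way_delay_ms(1860)
print(f"one way: {d:.1f} ms, round trip: {2 * d:.1f} ms")
```

Real circuits come out longer still: fiber follows roads and rail lines rather than straight-line paths, and every router and regenerator along the way adds its own delay, which is how a 15 ms propagation figure becomes 20 ms or more in practice.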

Most applications aren’t going to be impacted by a 40+ millisecond delay, although half a second is definitely going to be noticeable and perhaps difficult to live with. This argues for using a terrestrial fiber link to the cloud (which is actually on the ground) and using the shortest path possible. A point-to-point dedicated private line is most likely the highest performance you can achieve. You may be able to get equivalent or nearly equivalent performance through a privately run MPLS network at a lower cost.

How About the Internet?
Oh, yes, the Internet. It sounds like the ideal resource. It goes everywhere and connects everyone. The cost is amortized over so many users that the Internet is going to be your low cost solution. Security is certainly an issue, but encryption can create tunnels that give you a virtual private network that emulates a truly private one.

That emulation may fall short, not on security, but on performance. The Internet is not engineered to minimize latency or jitter. Its architecture is designed for resilience. If you lose a connection between servers, the network will automatically reconfigure to heal itself and keep the traffic moving. Unfortunately, that means your packets may take different paths on different trips. They may also encounter bottlenecks if particular nodes get overloaded.
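The effect of shifting paths can be sketched with a toy model. Everything here is invented for illustration: two hypothetical routes between the same endpoints, each adding a small random queuing delay at every router hop.

```python
import random

random.seed(7)  # make the illustration reproducible

# Two hypothetical Internet paths between the same endpoints:
# (propagation delay in ms, number of router hops). Values are invented.
PATHS = [
    (22.0, 9),    # the shorter route
    (31.0, 14),   # a longer detour after a reroute
]

def trip_latency_ms(path):
    """Propagation delay plus a variable 0-2 ms queuing delay per hop (assumed)."""
    prop_ms, hops = path
    return prop_ms + sum(random.uniform(0.0, 2.0) for _ in range(hops))

# Packets that happen to take different paths see different delays:
trips = [trip_latency_ms(random.choice(PATHS)) for _ in range(10)]
print([round(t, 1) for t in trips])
print(f"spread: {max(trips) - min(trips):.1f} ms")
```

A private line avoids this entirely: the path is fixed, so every packet sees the same route and roughly the same delay.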

Does this mean you can’t use the Internet to connect to the cloud? For maximum performance or highly sensitive interactive and real-time applications, you’ll do better with private lines. Otherwise, you might be satisfied with a private/public hybrid called DIA, or Dedicated Internet Access. You still share the high-performance backbone of the Internet core, but you connect via a private line that minimizes latency, jitter and packet loss on that critical first mile.

Finding a Better Cloud Connection
There is a range of cloud connectivity options available for most business locations. Look at the cost/performance tradeoffs of each, and then choose the link that makes your connection to the cloud as transparent as you need it to be.

Click to check pricing and features or get support from a Telarus product specialist.

Follow Telexplainer on Twitter