
Monday, March 11, 2013

What Makes Low Latency Bandwidth Important

Bandwidth is bandwidth, right? What’s really important is how many bits per second you can squeeze down the line, isn’t it? Or is it?

Lower latency bandwidth solutions are available for your business needs. The speed of your connection is one important specification in a bandwidth solution that meets the needs of your company. Others include availability, packet loss, jitter and latency. Don't underestimate the importance of latency in providing a satisfactory WAN service.

What is latency, and isn't it related to bandwidth? In some cases, yes. A shortage of bandwidth increases latency, because traffic has to wait its turn on the line. If you have a huge file, say a radiology image, that you need to transfer to another location in 10 minutes and it takes 10 hours, you'll feel the pain. If your VoIP calls are getting derailed by employee Internet browsing, you'll hear the pain. If your video conferencing breaks up so badly that it is unusable, lost employee productivity can be a real source of pain.
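As a rough illustration of the bandwidth side of the problem, transfer time is simply file size divided by usable line rate. Here is a quick sketch in Python; the 2 GB image size and the link speeds are assumed numbers, not figures from any particular service.

```python
# Rough transfer-time math: file size divided by usable line rate.
# The 2 GB image size and the link speeds below are illustrative assumptions.

def transfer_minutes(file_bytes: float, line_mbps: float) -> float:
    """Ideal transfer time in minutes, ignoring protocol overhead and loss."""
    bits = file_bytes * 8
    return bits / (line_mbps * 1_000_000) / 60

image_bytes = 2 * 1024**3  # a 2 GB radiology study (assumed size)
for mbps in (1.5, 10, 100, 1000):
    print(f"{mbps:>6} Mbps -> {transfer_minutes(image_bytes, mbps):6.1f} minutes")
```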

Latency is the delay between sender and receiver. If you have all the bandwidth your equipment can possibly use, that delay is a function of how fast the packets can get through the line unimpeded. What slows them down is the speed of light in wire and fiber and any delays introduced by routers, switches and amplifiers along the way.
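A simple back-of-the-envelope model of that one-way delay adds propagation time through the fiber to a small delay for each box in the path. The sketch below assumes roughly 124 miles per millisecond in fiber and a tenth of a millisecond per device, both illustrative figures.

```python
# A sketch of one-way latency: propagation delay through fiber plus a small
# delay for each device in the path. The 124 miles-per-millisecond figure
# (roughly 2/3 of c) and the 0.1 ms per box are assumptions for illustration.

FIBER_MILES_PER_MS = 124

def one_way_latency_ms(route_miles: float, devices: int,
                       per_device_ms: float = 0.1) -> float:
    propagation = route_miles / FIBER_MILES_PER_MS
    equipment = devices * per_device_ms
    return propagation + equipment

print(f"{one_way_latency_ms(route_miles=800, devices=12):.1f} ms")  # ~7.7 ms
```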

So, how is it possible to have plenty of bandwidth but too much latency? The perfect example is two-way satellite transmissions. You've no doubt noticed that on-location live TV reports seem to have a delay that prevents normal two-way conversations. The anchor and the reporter have to each pause before talking or they'll talk over each other. That's latency. Most of it is caused by the simple fact that the geostationary satellite relaying the signal is located about 22,000 miles overhead. Even in a vacuum it takes light and radio waves a millisecond for each 186 miles of distance, or a full second for each 186,000 miles. Since the signal has to travel up to the satellite and back down to Earth, a single hop covers roughly 45,000 miles. That results in a minimum delay of about a quarter-second for a one-way transmission, or half a second for a round trip.
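That quarter-second figure falls straight out of the geometry. Here is the arithmetic as a small sketch; the 22,300-mile altitude and the straight up-and-down paths are simplifying assumptions.

```python
# Minimum geostationary satellite delay from the geometry alone.
# The 22,300-mile altitude and straight up-and-down paths are simplifications.

SPEED_MILES_PER_SEC = 186_000   # light and radio waves in a vacuum
ALTITUDE_MILES = 22_300         # approximate geostationary altitude

one_way = 2 * ALTITUDE_MILES / SPEED_MILES_PER_SEC   # up to the satellite and back down
round_trip = 2 * one_way                             # plus the reply path

print(f"one-way trip: {one_way * 1000:.0f} ms")      # ~240 ms, about a quarter second
print(f"round trip:   {round_trip * 1000:.0f} ms")   # ~480 ms, about half a second
```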

Is there a technical way to reduce this latency? Nope. As long as we use electromagnetic waves we’re stuck with Einstein’s speed limit. If you want lower latency, you need to use shorter paths.

One way that carriers are reducing latency is by establishing point-to-point connections that run in as straight a line as possible. This can mean new fiber installations with minimal length between cities. It also means removing as much electronics from the path as possible. Each box adds a little latency as the signals are converted from light to voltage and back to light again. Of course, bandwidth still has to be high enough that the link appears transparent to the application.

An extreme example of low latency requirements is high speed financial trading, where computers make trade decisions and place orders far faster than a human broker possibly could. A half-second delay is an eternity to such systems. Every millisecond counts when you are issuing hundreds or thousands of buy/sell orders every second. Even an optimized fiber optic link from out of state or across national borders may have too much latency. The ultimate solution is to move your computers into the same data center as the trading floor. That takes the delay problem from milliseconds down to microseconds or even nanoseconds.
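To see why colocation turns milliseconds into microseconds, compare an out-of-state fiber route with a cross-connect inside the building. Both distances in this sketch are made-up numbers, and it again assumes about 124 miles per millisecond in fiber.

```python
# Why colocation moves the problem from milliseconds to microseconds.
# Assumes ~124 miles per millisecond in fiber; both distances are made up.

MILES_PER_MS = 124
FEET_PER_MILE = 5280

cross_state_miles = 250      # an out-of-state fiber route
in_building_feet = 300       # a cross-connect inside the data center

print(f"{cross_state_miles / MILES_PER_MS * 1000:,.0f} microseconds across state lines")
print(f"{in_building_feet / FEET_PER_MILE / MILES_PER_MS * 1000:.2f} microseconds down the hall")
```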

You may not need such highly optimized latency reduction solutions for your business. However, you should be aware that cloud services can become latency-limited. Any signal delay within your own company LAN is likely unnoticeable. Stretch that connection out a thousand miles or two over the Internet and you may experience noticeable response delays when interacting with your applications. If you can't live with the performance of the public Internet, you need a latency optimized bandwidth solution.
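One quick way to get a feel for that delay is to time a few TCP connections to the service and take the best result. The hostname in this sketch is just a placeholder for your own cloud application's endpoint.

```python
# A rough feel for application latency: time a few TCP handshakes to the service.
# "example.com" is a placeholder; substitute your cloud application's hostname.
import socket
import time

def rough_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection closed right away; we only want the setup time
        timings.append((time.perf_counter() - start) * 1000)
    return min(timings)  # the best sample is closest to pure network latency

print(f"~{rough_rtt_ms('example.com'):.1f} ms to the application endpoint")
```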

Big fiber optic carriers like Zayo, MegaPath and XO Communications offer high bandwidth, low latency private line connections that can improve the performance of your cloud-based applications. You'll also need this for high quality VoIP telephony and video conferencing. Be sure to investigate the impact of latency on your SIP trunks and multi-location networking through MPLS. Low latency solutions are available for just about every need.

Do you need lower latency as well as higher bandwidth? Get competitive pricing on low latency, high speed bandwidth solutions for local, interstate and international connections.

Click to check pricing and features or get support from a Telarus product specialist.




Wednesday, May 11, 2011

Telx Cloud Exchange Reduces Service Lag Time

The idea behind cloud services is that you can shut down your local data center and rent everything you need in the way of infrastructure, platform and software from a service provider somewhere out there. You’ll save a ton of money and the users will never know the difference. Except some of them do notice a difference. Things can bog down in the cloud.

Colocation reduces latency. How can that be? One premise of cloud computing is that there is a near-infinite well of resources to draw from. Need more processing? You bring it online almost instantly. Need more disk storage? You take it. After all, there is plenty to go around. So how can responses that were snappy when the data center was in the basement become sluggish when the same facilities are in the cloud?

It all comes down to connectivity. Everybody knows that electrical signals move at the speed of light, right? You might guess that’s so fast that it shouldn’t make any difference if the wire is a hundred feet long or a thousand miles. At the speed of light a human can’t possibly detect the travel time of electrical impulses over wires and fiber optic cables. That’s right, isn’t it?

It sure sounds nice in theory, but in practice the speed of light isn't infinite, and communication signals never quite reach it anyway. Remember that the speed of light so often quoted is the speed in a vacuum: 186,000 miles per second when your laser beam is shooting through space. That works out to 186 miles per millisecond, or 10 milliseconds for 1,860 miles. Signals on terrestrial circuits can't even go that fast, because any medium slows them down. You'll be lucky to go about 2/3 as fast in fiber, or a millisecond for every 124 miles.
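That 2/3 rule of thumb comes from the refractive index of the glass: light in fiber travels at c divided by the index. The sketch below assumes a typical index of about 1.47.

```python
# Where the "2/3 as fast" rule of thumb comes from: light in glass travels at
# c divided by the refractive index. n = 1.47 is an assumed typical value for fiber.

C_MILES_PER_SEC = 186_000
REFRACTIVE_INDEX = 1.47

fiber_miles_per_sec = C_MILES_PER_SEC / REFRACTIVE_INDEX
print(f"speed in fiber:  {fiber_miles_per_sec:,.0f} miles per second")
print(f"per millisecond: {fiber_miles_per_sec / 1000:.0f} miles")               # ~127 miles
print(f"1,860 miles:     {1860 / (fiber_miles_per_sec / 1000):.1f} ms one way")  # ~14.7 ms
```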

Are we forgetting something? You bet. There’s no such thing as communicating over one long strand of pure wire or fiber. There’s circuitry at both ends and amplifiers, regenerators, add-drop multiplexers and other equipment in-between. Those will add milliseconds or tens of milliseconds more.

That's still nothing compared to what happens when packets are routed over the Internet. They get from point A to point B all right, but they seldom go in a straight line. They go from router to router to router and eventually to the destination. There's no guarantee the next packet will take the same route as the last one. There's also no guarantee that the packet will even get there intact. Oh, one is missing? TCP/IP will resend it and all will be well. The file being transferred will certainly be intact at the other end, but how long did it take to replace all the lost packets and wait out the traffic jams at congested nodes?
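There's a widely cited rule of thumb, the Mathis et al. approximation, that ties sustained TCP throughput to segment size, round-trip time and packet loss. The numbers plugged in below are illustrative assumptions, but they show how a longer path with a little loss strangles throughput.

```python
# Mathis et al. rule of thumb: sustained TCP throughput is roughly
#   MSS / (RTT * sqrt(loss_rate))
# The segment size, round-trip times and loss rate below are assumptions.
import math

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    bits_per_sec = (mss_bytes * 8) / ((rtt_ms / 1000) * math.sqrt(loss_rate))
    return bits_per_sec / 1_000_000

# Same 0.1% packet loss, but a cross-country RTT versus a same-building RTT.
print(f"{tcp_throughput_mbps(1460, rtt_ms=80, loss_rate=0.001):.1f} Mbps at 80 ms RTT")
print(f"{tcp_throughput_mbps(1460, rtt_ms=2, loss_rate=0.001):.0f} Mbps at 2 ms RTT")
```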

Cloud providers and companies sensitive to lag time, also known as latency, are taking a close look at colocation to have the shortest and most direct communications paths possible. A step beyond even standard colocation facilities is the new cloud exchange service from Telx. It’s branded cloudXchange and it may be the future of data centers.

Telx's breakthrough comes in inviting cloud service providers to move in with them, literally. A service provider can locate its infrastructure in one or all of the 15 Telx facilities. What it gains is access to a wealth of carriers that have established points of presence within the Telx facilities, plus major corporations, content delivery networks and others that are just down the hall in the same building. For long haul connections, Telx has access to low latency fiber routes between data centers and to worldwide destinations.

This may be where we're all headed. Instead of every company having its own server racks connected directly to the corporate LAN, most infrastructure will be outsourced to a cloud service provider or colocated in the same building to form a hybrid cloud. User connectivity will be over high speed dedicated lines, perhaps just to the nearest colo facility where service providers will have a portion of their infrastructure. A separate Internet access path will be available to browse the Web, share email with outsiders and connect with consumers.

Are you a user or provider of cloud services who is unhappy with your networking connections? Perhaps you can benefit from an upgrade to higher bandwidth, lower latency connectivity to get rid of the lags that are plaguing your business processes.

Click to check pricing and features or get support from a Telarus product specialist.


Note: Photo of data center servers courtesy of Wikimedia Commons.


