HD and Beyond: Monitoring Client Video Throughput

In 2016 it’s more or less a given that pretty much all of your residential clients are doing some kind of media streaming to enjoy their favourite movies or shows.

That means the quality of the viewing experience you provide as an installer is increasingly tied to, and impacted by, the standard of the networks you install.

Netflix, for example, recommends a whopping 25 megabits per second to move 4K over the network, which means anything that squeezes your bandwidth below that mark will jeopardize optimal viewing quality.

Below are the Internet download speed recommendations per stream for playing movies and TV shows through Netflix:

0.5 Megabits per second - Required broadband connection speed
1.5 Megabits per second - Recommended broadband connection speed
3.0 Megabits per second - Recommended for SD quality
5.0 Megabits per second - Recommended for HD quality
25 Megabits per second - Recommended for Ultra HD quality
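
To put those tiers to work, here is a minimal Python sketch that maps a measured download speed to the highest Netflix tier it supports (the thresholds are the values from the list above; everything else is illustrative):

# Netflix per-stream recommendations (Mbps), from the list above
TIERS = [
    (25.0, "Ultra HD"),
    (5.0, "HD"),
    (3.0, "SD"),
    (1.5, "Recommended broadband minimum"),
    (0.5, "Required broadband minimum"),
]

def best_tier(measured_mbps: float) -> str:
    """Return the highest quality tier a measured speed supports."""
    for threshold, label in TIERS:
        if measured_mbps >= threshold:
            return label
    return "Below Netflix's required minimum"

print(best_tier(8.07))  # -> HD
print(best_tier(0.85))  # -> Required broadband minimum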

What could squeeze your bandwidth, you ask? Great question. There are mainly two things that will take what you thought was your super high-speed network down a few notches: one is latency and the other is packet loss. Being alerted when either of these factors is dragging down your customer’s download speed is critical. It could be the difference between spending an hour on site to troubleshoot the problem and never going to the client’s site at all, fixing the problem before they even notice there is an issue. Not only does this kind of network speed monitoring help troubleshoot HD video issues, it is also a great value-add service you can charge your clients a monthly fee to use.
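
As a rough sketch of what that kind of alerting can look like, here is a small Python example built on the third-party speedtest-cli module. (The 15 Mbps threshold and the print-based alert are placeholders; a real deployment would feed into your monitoring platform.)

import speedtest  # pip install speedtest-cli

ALERT_THRESHOLD_MBPS = 15.0  # placeholder: set per client and service tier

def measure_download_mbps() -> float:
    """Run a download test and return the result in Mbps."""
    st = speedtest.Speedtest()
    st.get_best_server()
    return st.download() / 1_000_000  # bits per second -> Mbps

speed = measure_download_mbps()
if speed < ALERT_THRESHOLD_MBPS:
    # Placeholder alert: swap in email, SMS, or a ticket in production
    print(f"ALERT: download speed {speed:.1f} Mbps is below threshold")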

What is Network Latency?
For the purpose of our discussions, consider Network Latency to be the time it takes for a network packet to travel from one device to another.  Latency is much like the time it takes for your voice to travel from your mouth to the ear of the person you are speaking to.

A more complicated definition of latency will begin to take into account many things that are well beyond the scope of this exercise, including things like “jitter,” which measures how much variation there is in packet delay over time.
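
To make that concrete, here is a minimal Python sketch that estimates round-trip latency by timing a TCP connection to a host. (The host and port are placeholders; a dedicated monitoring tool would typically use ICMP ping and average many samples.)

import socket
import time

def estimate_rtt_ms(host: str, port: int = 443) -> float:
    """Estimate round-trip latency by timing a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # handshake complete; connection closes immediately
    return (time.perf_counter() - start) * 1000

print(f"Estimated RTT: {estimate_rtt_ms('example.com'):.1f} ms")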

Where Does Latency Come From?
Latency is a cumulative effect of the individual latencies along the end-to-end network path. This includes every network segment along the way between two devices (like a switch or access point).  Every segment represents another opportunity to introduce additional latency in the network.

Network routers are the devices that create the most latency of any device on the end-to-end path. Additionally, packet queuing due to link congestion is often the culprit for large amounts of latency. Some types of network technology such as satellite communications add large amounts of latency because of the time it takes for a packet to travel across the link.  Since latency is cumulative, the more links and router hops there are, the larger end-to-end latency will be.

We are regularly asked by customers what acceptable network performance looks like and what can be done to improve things when performance is sub-par. The unfortunate answer is that there is no silver bullet for all network issues. Instead, let’s start by understanding latency a bit better.

What Happens with High Latency?
Let’s get a little more technical.

TCP (Transmission Control Protocol) traffic represents the majority of network traffic on your local network. TCP is a “guaranteed” delivery protocol, meaning that the device sending the packets gets a confirmation for every packet that is sent.  The receiving device sends back an acknowledgment packet to let the sender know that it received the information.  If the sender does not receive an acknowledgement in a certain period of time, it will resend the “lost” packets.

For simplicity, think of the “window size” as the amount of data the sender may have in flight before it must stop and wait for an acknowledgement. While the sender is waiting on acknowledgements or re-sending lost packets, it is not sending new information. The window size is adjusted over time and tightly correlates to the amount of latency between the two devices.

As latency increases, the sending device spends more and more time waiting on acknowledgements rather than sending packets!

Does Latency Really Affect Anything?
Because the sender can transmit at most one window’s worth of data per round trip, throughput is roughly the window size divided by the round-trip time, and there is a direct inverse relationship between latency and throughput on the network. Let’s look at an example of two devices that are directly connected via a 100Mbps Ethernet network (nothing in between). The theoretical max throughput of this network is 100Mbps. Take a look at what happens to that throughput as latency increases. The results were obtained by using a latency generator between the two devices.

Round trip latency        TCP Throughput
0ms                       93.5 Mbps
30ms                      16.2 Mbps
60ms                      8.07 Mbps
90ms                      5.32 Mbps

Notice how drastic the drop in throughput is with round trip times as low as 30ms!
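
Those measured numbers line up closely with the window arithmetic described above. As a rough sanity check, here is the calculation in Python, assuming a classic 64 KB TCP window (an assumption on our part, not a detail taken from the test setup):

WINDOW_BITS = 65_535 * 8  # assumed classic 64 KB TCP window, in bits

for rtt_ms in (30, 60, 90):
    # The sender can move at most one window per round trip
    ceiling_mbps = WINDOW_BITS / (rtt_ms / 1000) / 1_000_000
    print(f"{rtt_ms} ms -> {ceiling_mbps:.1f} Mbps theoretical ceiling")

# Output: 17.5 Mbps at 30ms, 8.7 at 60ms, 5.8 at 90ms -- right in
# line with the measured 16.2, 8.07, and 5.32 Mbps.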

What a Pile Up!
So what happens to all of those packets when they get “lost”? A compounding factor for the problem of lost packets is that they will have to be resent, which then increases the overall data being transmitted.  After the sender sends the data, it will sit idle until it gets a confirmation back from the receiver as to the status of the packets. In some cases, the packets that get lost might even be the acknowledgement back from the receiver, meaning that the sender will be re-sending information that was already sent successfully.  The simple result is an even further degradation of the throughput. These actions have a much bigger impact on the network bandwidth than you may have believed.

Let’s use a packet loss generator to introduce just 2% packet loss and evaluate the impact.

Round trip latency    TCP Throughput (no packet loss)    TCP Throughput (2% packet loss)
0 ms                  93.50 Mbps                         3.72 Mbps
30 ms                 16.20 Mbps                         1.63 Mbps
60 ms                 8.07 Mbps                          1.33 Mbps
90 ms                 5.32 Mbps                          0.85 Mbps

Again, consider the requirements for Netflix to stream content. To get even SD quality you will need at least 1.5-3.0 Mbps. In this case, with just 2% packet loss and only a 60ms round trip time, users will be below the recommendation, even if nothing else is happening on the network!
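
For the curious, these measurements also track the well-known Mathis approximation for TCP throughput under loss: throughput ≈ MSS / (RTT × √loss). Here is a rough check in Python, assuming a standard 1460-byte Ethernet segment size (the formula can’t be applied to the 0 ms row, since it divides by the round-trip time):

import math

MSS_BITS = 1460 * 8  # assumed standard Ethernet TCP segment size
LOSS = 0.02          # the 2% packet loss introduced above

for rtt_ms in (30, 60, 90):
    # Mathis et al. approximation: throughput <= MSS / (RTT * sqrt(loss))
    mbps = MSS_BITS / ((rtt_ms / 1000) * math.sqrt(LOSS)) / 1_000_000
    print(f"{rtt_ms} ms -> {mbps:.2f} Mbps predicted ceiling")

# Output: 2.75, 1.38, and 0.92 Mbps -- the same ballpark as the
# measured 1.63, 1.33, and 0.85 Mbps.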

What Latency Should We See?
There is no standard answer for what network latency you should see on a customer’s network; many different factors impact what an acceptable latency will be. What follows is just a guideline.

A round trip latency of 30ms or less is healthy. Round trip latency between 30ms and 50ms should be monitored closely; consider looking deeper at the network for potential issues. Round trip latency over 50ms requires immediate attention to determine the cause of the latency and identify potential remedies. Continue monitoring to track improvements.

The most important effort you can make for your customers is simply to monitor network speeds over the course of a month.
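
Those thresholds are easy to fold into whatever monitoring you already run. Here is a minimal sketch; the cutoffs are the guideline values above, and the sample readings are placeholders:

def classify_latency(rtt_ms: float) -> str:
    """Bucket a round-trip latency reading per the guideline above."""
    if rtt_ms <= 30:
        return "healthy"
    if rtt_ms <= 50:
        return "monitor closely"
    return "requires immediate attention"

# Placeholder: a handful of readings collected over the month
for rtt in (12, 18, 44, 51, 29):
    print(f"{rtt} ms -> {classify_latency(rtt)}")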

Can I Lower Network Latency?
Again, there are a lot of factors at play here, so here is a broad swipe.

If you see high latency while monitoring a customer’s network speed, you first need to identify the full path of communication between the monitoring appliance and the device in question. Next, you will want to look at the reported latency between the appliance and each of the devices in that path. If one or more devices appear to be contributing significantly to the latency, you should begin to devise strategies to test possible changes that might improve performance (e.g., firmware, wireless signal, etc.). At this point, you may want to use a computer on the local network to do more granular tests like ping and traceroute.
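
If you want to script those granular tests, a small Python wrapper around the system ping utility is a reasonable starting point. (The device list is a placeholder, and the -c flag shown is for Linux/macOS; Windows uses -n instead.)

import subprocess

# Placeholder: the devices you identified along the communication path
PATH_DEVICES = ["192.168.1.1", "192.168.1.10", "192.168.1.20"]

for host in PATH_DEVICES:
    # Send 5 pings; the final line of output summarizes min/avg/max RTT
    result = subprocess.run(
        ["ping", "-c", "5", host],
        capture_output=True, text=True, timeout=60,
    )
    print(f"--- {host} ---")
    print(result.stdout.strip().splitlines()[-1])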

Steve Muccini is Director of Marketing for Ihiji focused on building brand, creating content, lead generation and enabling the sales team. Steve brings more than 20 years in technology marketing to this growing remote technology management business.
