
TeleDynamics Think Tank

Essential steps for applying QoS on UC networks

Posted by Daniel Noworatzky on Aug 28, 2024 10:08:40 AM


Network services such as voice over Internet Protocol (VoIP), unified communications (UC), video conferencing, and collaboration systems are fundamentally dependent on the quality and robustness of the underlying network infrastructure. When deploying an enterprise network, it is crucial to implement sound network design principles to guarantee optimal performance, reliability, and scalability.

Various aspects of enterprise network design work together to achieve high levels of network performance. This article focuses specifically on quality of service (QoS) and some best practices associated with it.

What is QoS?

The term “quality of service” encompasses a series of mechanisms and design features used to prioritize specific types of traffic on the network. In the event of network congestion, the transmission of this priority traffic takes precedence, so slowdowns are avoided.

This process is especially important for time-sensitive network traffic types, such as VoIP, UC, and videoconferencing services, because audio and video traffic is particularly sensitive to packet delay, packet loss, and jitter.

Enterprise network design

A somewhat misguided approach to addressing network congestion for time-sensitive services is simply overprovisioning the network. The idea is that by increasing link bandwidths sufficiently, one can avoid network congestion, so QoS becomes unnecessary.

Although this seems logical, it is both costly and impractical. Network congestion is a dynamic phenomenon influenced by various unpredictable factors, and it can occur despite efforts to overprovision.

These factors include dynamic user traffic patterns, the rapid introduction of new network applications, untimely network server backups, and the misconfiguration or malfunction of Layer 2 loop-prevention mechanisms like the Spanning Tree Protocol (STP) or Layer 3 routing protocols.

The nature of network traffic is such that even with significantly large data pipelines, network congestion can still occur. Therefore, you should implement QoS end-to-end on all devices within an enterprise network regardless of the network’s capacity.

The nature of today’s networks

A network comprises intermediary network devices such as routers, switches, firewalls, and wireless access points. The interconnection of these devices forms the network fabric that serves the traffic sent by end devices.

Communication over such networks is exclusively based on the TCP/IP model, leveraging either the long-established IPv4 protocol or the up-and-coming IPv6 Layer 3 protocol. In both cases, communication takes place over a packet-switched network.

This means that the data being sent — whether an email, part of a website, a voice conversation, or a videoconference — is broken down into small segments and sent in a series of packets across the network.

When the amount of traffic on the network is less than the capacity of the links that interconnect the network devices, all packets that arrive at a network device are served immediately.

However, when a link’s traffic capacity is reached and surpassed, a network device will queue a certain number of packets in its internal memory in an attempt to serve them on a first-come, first-served basis.

This situation will cause a delay in the arrival of some packets to their intended destinations and may also increase the level of jitter observed. If the queue's capacity is surpassed, packets may also be dropped, resulting in incomplete data reaching the destination.

This is the typical behavior of network devices when congestion occurs and QoS is not employed. It is not an issue for types of data that are not time-sensitive, such as file transfers, web services, text messaging, or social media posts.

Upper-layer protocols such as TCP, and the applications using those protocols, will deal with and correct the issues introduced by delayed or dropped packets. Receiving a text a second or two later than expected, or having to wait a little longer for a file transfer to complete, is tolerable.

However, for a voice or video conversation, packet delays or drops can result in your not understanding your counterpart, which is unacceptable.
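To make this queuing behavior concrete, here is a minimal Python sketch (with arbitrary example numbers) of a first-come, first-served queue with a finite buffer: when packets arrive faster than the link can serve them, queuing delay and jitter grow, and once the buffer is full, packets are dropped.

from collections import deque

# First-come, first-served queue with a finite buffer (all numbers are
# arbitrary examples). Packets arrive faster than the link can serve them,
# so queuing delay grows and, once the buffer is full, packets are dropped.

QUEUE_CAPACITY = 8   # packets the device can hold in memory
SERVICE_RATE = 2     # packets the link transmits per time step
ARRIVAL_RATE = 3     # packets arriving per time step (more than the link can carry)

queue = deque()
delays = []
dropped = 0

for step in range(20):
    # Packets arrive; any that do not fit in the buffer are dropped.
    for _ in range(ARRIVAL_RATE):
        if len(queue) < QUEUE_CAPACITY:
            queue.append(step)            # remember when the packet arrived
        else:
            dropped += 1
    # The device serves packets strictly in arrival order.
    for _ in range(SERVICE_RATE):
        if queue:
            arrived_at = queue.popleft()
            delays.append(step - arrived_at)

print(f"packets dropped: {dropped}, worst queuing delay: {max(delays)} steps")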

QoS components

To apply QoS to a network, administrators must configure the network to identify, mark, and then actively prioritize packets based on the requirements of each packet type. We further describe these components of QoS below.

Classification

By default, all traffic on a network is treated the same. To change this behavior, you must first identify the packets you want treated differently. This first step in implementing QoS is known as classification, which identifies and categorizes network traffic based on predefined criteria. As packets pass through a network device, it examines specific packet parameters to determine their classifications.

Many criteria can be used for classification, including network parameters such as source and destination IP addresses or TCP/UDP ports. Classification could also involve a more intelligent process called deep packet inspection.

This allows a network device to examine the contents of a packet and determine which application generated it, such as whether it is part of a voice or video conversation or belongs to a web browsing session.
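As a rough illustration, the Python sketch below classifies packets using simple header criteria; the port ranges and class names are illustrative assumptions rather than values mandated by any standard.

# Hypothetical example: classify a packet using simple header fields.
# The port ranges and class names are illustrative choices, not values
# mandated by any standard.

def classify(packet: dict) -> str:
    """Return a traffic class for a packet described by its header fields."""
    if packet.get("protocol") == "udp" and 16384 <= packet.get("dst_port", 0) <= 32767:
        return "voice"        # a UDP port range commonly used for RTP media
    if packet.get("protocol") == "tcp" and packet.get("dst_port") in (80, 443):
        return "web"
    return "best-effort"

print(classify({"protocol": "udp", "dst_port": 20000}))  # voice
print(classify({"protocol": "tcp", "dst_port": 443}))    # web
print(classify({"protocol": "tcp", "dst_port": 25}))     # best-effort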

Marking

Classification itself does not result in any prioritization — it is simply the process by which packets are differentiated based on particular criteria. Once identified, packets must be marked appropriately. This process refers to tagging or labeling packets with specific values that indicate their priority level or class of service.

Marking can occur at either Layer 2 or Layer 3 of the OSI model. At Layer 3, marking values are placed in the Differentiated Services (DS) field of the IPv4 or IPv6 header; at Layer 2, they are placed in the priority bits of the 802.1Q VLAN tag of an Ethernet frame. Such markings help downstream network devices like routers and switches identify and differentiate between different types of traffic.
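As a simple example of Layer 3 marking, an application can set the DS field on its own traffic through a standard socket option. The Python sketch below marks outgoing UDP packets with DSCP 46 (Expedited Forwarding), the value commonly used for voice media; whether downstream devices honor the marking depends entirely on their configuration, and the destination address and port here are placeholders.

import socket

# Mark outgoing UDP packets with DSCP 46 (Expedited Forwarding), the value
# commonly used for voice media. The DS field occupies the upper six bits of
# the former IPv4 TOS byte, so the byte written is 46 << 2 = 0xB8.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Every datagram sent on this socket now carries the EF marking in its IPv4
# header; downstream devices may honor, remap, or strip it.
sock.sendto(b"voice payload", ("192.0.2.10", 20000))  # documentation address and example port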

Applying policies

At this point, prioritization has still not taken place. Classification and marking are simply mechanisms that identify and then tag packets based on the criteria set by network administrators. The actual QoS magic takes place with the application of QoS policies.

QoS policies are rules configured within network devices that tell a device how to treat packets with particular markings or values within the DS field or VLAN tag. For instance, packets with higher priority markings might be placed in a high-priority queue, ensuring they are transmitted more quickly than lower-priority packets.

These policies ensure that those packets requiring prioritization are sent immediately, thus guaranteeing that the application using those packets will receive them promptly and will not be affected by slowdowns caused by network congestion.
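As a simplified sketch of one such policy, strict-priority queuing always empties the higher-priority queue before serving anything else; real devices combine this with mechanisms such as weighted fair queuing, policing, and shaping.

from collections import deque

# Simplified strict-priority scheduler: packets marked "voice" are always
# transmitted before "best-effort" packets. Real devices combine this with
# weighted fair queuing, policing, and shaping; this is only a sketch.

queues = {"voice": deque(), "best-effort": deque()}

def enqueue(packet: dict) -> None:
    # Place the packet in the queue that matches its marking (default: best effort).
    queues.get(packet.get("class"), queues["best-effort"]).append(packet)

def dequeue():
    # Serve the voice queue first; only when it is empty is best-effort traffic sent.
    for name in ("voice", "best-effort"):
        if queues[name]:
            return queues[name].popleft()
    return None

enqueue({"class": "best-effort", "id": 1})
enqueue({"class": "voice", "id": 2})
print(dequeue())  # the voice packet is sent first, even though it arrived second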

Caveats and considerations

Consistent marking is necessary from the source to the destination to maximize QoS effectiveness. This consistency ensures that all network devices along the path recognize and respect the QoS markings, providing the intended level of service throughout the network.

Any network device along a packet’s communication path may ignore, modify, or even remove QoS markings. This is especially important if your marked traffic is sent to networks under another’s administrative control. Appropriate agreements with third-party networks should be made to ensure that you achieve the QoS thresholds your services require.

QoS is typically not applied or honored on the internet. Any markings that you may place on traffic that eventually traverses the internet will be either ignored or removed.

Conclusion

QoS is a vital yet often overlooked network design concept that is critical for enterprise networks. When applied correctly, essential applications like VoIP, videoconferencing, and mission-critical data services receive the necessary bandwidth and prioritization to function smoothly, even during network congestion.

QoS enhances the overall performance and reliability of an organization’s network infrastructure by effectively managing latency, jitter, and packet loss. As enterprise networks continue to grow in complexity and demands, implementing QoS is not just an option but a necessity for maintaining optimal service quality and user satisfaction.


