TeleDynamics Think Tank

Beyond the cloud: how edge and fog computing power modern communications

Written by Daniel Noworatzky | Sep 3, 2025 2:23:00 PM

When you place a VoIP call or join a video meeting, the experience feels seamless—your voice and video just work. Behind the scenes, however, an invisible framework of computing models is orchestrating that experience, ensuring low latency, high reliability, and scalable performance.

Three of the most important concepts driving this evolution are cloud, edge, and fog computing. Each represents a distinct way of deploying compute power, and understanding their differences is key to building resilient VoIP and UC systems.

In this article, we demystify these models, explain how they shape the future of communications technologies, and show you where each one fits best in real-world deployments.

First things first: what we mean by “compute”

Before we break down how cloud, edge, and fog computing work, it’s worth pausing to clarify what we mean by compute. The term compute refers to computational processing resources. For a long time, this meant using central processing units (CPUs) to run algorithms and perform operations on data. Over time, however, graphics processing units (GPUs) have also become a mainstream source of compute resources. Their massively parallel architecture makes them ideal for certain computational workloads. For instance, PCs, laptops, and other devices with screens use GPUs as part of the graphics card to process and generate the visuals we see. More recently, GPUs have also become the powerhouse behind the deployment and operation of AI data centers.

Traditionally, we thought of compute resources as monolithic and inseparable from our PC or server, accessible only to processes within our device. But consider this: your computer’s CPU is rarely running at peak capacity, and many of its cycles go unused. Indeed, the CPU on your computer is idle more often than not.

Decoupling compute

What if that unused potential could be leveraged by other processes running on different entities elsewhere on the network? In proper data center jargon, those processes are called “workloads.” It would be advantageous to “decouple” compute resources from the rest of the device and make them available to workloads running on nearby devices. This decoupling would enable a more granular and efficient usage of the compute resources.

Data center compute design

One of the primary goals of modern data centers is to decouple compute from the underlying infrastructure and pool capacity so that multiple workloads can draw from it dynamically as needed, thereby maximizing the overall utilization of the entire compute resource pool.

So, with this understanding of compute usage, we are ready to get into fog, edge, and cloud computing.

Fog, edge, and cloud computing

If you have decoupled your compute resources from your devices, you are free to move them to any physical location. In computing, the terms fog, edge, and cloud primarily refer to where those computing resources exist. Let’s take a closer look at each model.

Monolithic computing

For comparison purposes, let’s begin by defining what I call “monolithic” computing, which is simply the use of compute resources within a single device. The following diagram illustrates this.

The end device has a CPU and a GPU, which are the compute resources. The workloads or processes that are leveraging these resources are local to the end device. The compute resources cannot be offered to other devices, nor can workloads of this device use compute resources from other devices.

Cloud computing

Cloud computing decouples compute resources and places them on the cloud. End devices send their requests, which contain the data that must be processed, to the cloud infrastructure. The cloud infrastructure then responds to the request by processing the data, executing the workload, and sending back the results. The following diagram illustrates this process.

Depending on the nature of the workloads being processed, this model may require a robust internet connection with high bandwidth and low latency. Since all workloads must be transmitted to the cloud, you must factor in the transit time for requests, processing, and responses.
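As a rough illustration of why that transit time matters, the sketch below adds network round-trip time to server-side processing time and checks the total against a real-time budget. All figures here are hypothetical assumptions for illustration; for voice specifically, the ITU-T G.114 guideline of roughly 150 ms of one-way mouth-to-ear delay is a commonly cited reference point.

```python
# Rough latency budget for a cloud-processed request/response workload.
# All figures are illustrative assumptions, not measurements.

def total_response_ms(rtt_ms: float, processing_ms: float) -> float:
    """Network round-trip time plus server-side processing time."""
    return rtt_ms + processing_ms

def fits_budget(rtt_ms: float, processing_ms: float, budget_ms: float) -> bool:
    """Does the workload's total response time stay within budget?"""
    return total_response_ms(rtt_ms, processing_ms) <= budget_ms

# Hypothetical numbers: 60 ms round trip to a cloud region,
# 20 ms of processing, and a 100 ms real-time budget.
print(fits_budget(60.0, 20.0, budget_ms=100.0))   # True: 80 ms total
print(fits_budget(160.0, 20.0, budget_ms=100.0))  # False: 180 ms total
```

Note that no amount of cloud-side optimization reduces the round-trip term, which is exactly the limitation edge computing addresses.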

One example of cloud computing is software as a service (SaaS), where applications are delivered over the network. Many mobile and desktop applications use this model.

Edge computing

Edge computing strives to resolve one of cloud computing's limitations: the time it takes for requests and responses to traverse the network. No matter how fast your internet connection is or how low the latency, it still takes a non-trivial amount of time to reach the cloud, and this time can only be reduced so much.

Edge computing seeks to bring computing resources physically closer to the end device, or to the “edge” of the network, thus reducing this traversal time. The following diagram shows how a portion of the local data center is reserved for edge computing, thus processing workloads within the local network.

Edge computing can coexist with cloud computing. Non-time-sensitive processing can occur on the cloud while more intensive, time-sensitive workloads are served locally.

Edge computing is deployed less often for conventional end devices like PCs and laptops, and more often for specialized connected “things” such as smart sensors, machine-to-machine communication, IP cameras, and automation equipment. However, as you will see shortly, VoIP and unified communications (UC) are prime candidates to benefit from edge computing.

Fog computing

Fog computing goes a step further than edge computing. Compute resources are distributed along a continuum from the device to the cloud. For a given workload, you will find compute resources processing various components of it simultaneously at multiple layers. The following diagram shows how fog computing takes place.

This diagram illustrates compute resources at the local gateway device, at a “micro” data center located on the premises, at a regional data center, and at the cloud level. And even the cloud itself is distributed throughout the world with points of presence in various locations, helping to bring compute resources closer to the end user. Fog computing coordinates all these layers of compute resources, ensuring that the workloads requiring the fastest response times are processed as close as possible to the end device. Meanwhile, less time-critical workloads can be served by resources that are farther away.
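The coordination described above can be sketched as a simple placement rule: run each workload at the farthest tier that still meets its response-time requirement, so that the scarce close-in resources stay free for the most time-critical work. The tier names and round-trip figures below are illustrative assumptions, not measurements from any real deployment.

```python
# Sketch of a fog-style placement decision across the tiers the
# article describes. Latency figures are hypothetical.

TIERS = [                       # ordered nearest to farthest from the device
    ("local gateway",     5.0),  # round-trip ms, illustrative
    ("on-prem micro DC", 10.0),
    ("regional DC",      30.0),
    ("cloud",            80.0),
]

def place(max_response_ms: float) -> str:
    """Return the farthest tier that still meets the workload's
    response-time requirement; fall back to the nearest tier."""
    chosen = None
    for name, rtt in TIERS:
        if rtt <= max_response_ms:
            chosen = name        # keep going: prefer farther tiers that fit
    return chosen or TIERS[0][0]

print(place(8.0))    # a tight 8 ms requirement lands on the local gateway
print(place(200.0))  # a relaxed requirement can go all the way to the cloud
```

In a real fog deployment this decision also weighs load, cost, and data-residency rules, but the core idea is the same: distance from the device is a dial, not a binary choice.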

Some applications that leverage fog computing include smart grids, smart cities, connected vehicles, healthcare, and retail applications. These applications are particularly relevant to smart “things” in the realm of the Internet of Things (IoT).

Where compute happens in UC and VoIP applications

VoIP and UC are real-time, time-sensitive services that benefit from placing computing power where it best serves latency, quality, and resilience. Different types of workloads are processed at different locations to take advantage of all three compute models.

UCaaS platforms in the cloud handle call control, voicemail, recording, directories, and analytics at scale, with global reach and flexible capacity.

At the network edge, branch session border controllers (SBCs), media relays, and multi-access edge computing (MEC) nodes perform latency-critical tasks such as jitter buffering, NAT traversal, codec transcoding, and E-911 routing. These nodes also provide “survivability” features that activate in the event of a WAN failure, keeping media local and responsive.
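Of the edge tasks listed above, jitter buffering is the easiest to sketch: hold arriving packets briefly, reorder them by sequence number, and release them in order. The minimal version below is illustrative only; a production SBC uses adaptive playout delay and handles packet loss and sequence-number wraparound.

```python
import heapq

class JitterBuffer:
    """Minimal fixed-depth jitter buffer: queue packets in sequence
    order and release the oldest once more than `depth` are held."""

    def __init__(self, depth: int = 3):
        self.depth = depth
        self._heap = []          # (sequence_number, payload) pairs

    def push(self, seq: int, payload: bytes) -> None:
        heapq.heappush(self._heap, (seq, payload))

    def pop_ready(self):
        """Yield packets in sequence order once the buffer is full enough."""
        while len(self._heap) > self.depth:
            yield heapq.heappop(self._heap)

buf = JitterBuffer(depth=2)
for seq in (1, 3, 2, 5, 4):      # packets arriving out of order
    buf.push(seq, b"")
ordered = [seq for seq, _ in buf.pop_ready()]
print(ordered)                   # packets come out in sequence order
```

The buffer trades a small, fixed delay for smooth, in-order playout, which is why this work belongs close to the endpoint rather than across a WAN link.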

VoIP and UC have various entities that can participate in a fog computing arrangement. Endpoints, branch gateways, regional selective forwarding units (SFUs, commonly used with WebRTC applications), and media servers interact with each other, and this interaction is coordinated by choosing the best place to compute each function. Some of these placement decisions result from the initial design of the voice or UC deployment, while others occur dynamically. All of them balance quality, cost, and compliance.
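One way to picture that quality/cost/compliance balance is as a constrained choice: pick the cheapest site that still satisfies the latency and data-residency requirements of a given media function. The site names, costs, and residency flags below are entirely hypothetical.

```python
# Illustrative placement choice balancing quality (latency), cost,
# and compliance (data residency). All values are hypothetical.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    rtt_ms: float          # round-trip time to the endpoint
    cost_per_min: float    # relative compute cost
    in_region: bool        # satisfies data-residency rules

def choose_site(sites, max_rtt_ms, require_in_region):
    """Cheapest site meeting the latency and compliance constraints."""
    eligible = [s for s in sites
                if s.rtt_ms <= max_rtt_ms
                and (s.in_region or not require_in_region)]
    return min(eligible, key=lambda s: s.cost_per_min, default=None)

sites = [
    Site("branch SBC",    4.0, 0.050, True),
    Site("regional SFU", 25.0, 0.020, True),
    Site("cloud media",  80.0, 0.008, False),
]
best = choose_site(sites, max_rtt_ms=30.0, require_in_region=True)
print(best.name)   # the regional SFU wins: in-region and within 30 ms
```

Relax the constraints (say, a recording workload with no residency rule and no latency pressure) and the same function naturally drifts to the cheapest cloud site.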

Conclusion

Cloud, edge, and fog computing each offer distinct advantages for VoIP and UC deployments. By placing workloads where they perform best – whether in the cloud for scale, at the edge for speed, or across fog layers for balance – organizations can reduce costs, improve resilience, and deliver consistent call quality. Understanding these models is foundational to building communications platforms that are reliable, efficient, and ready to scale with tomorrow’s demands.

