TeleDynamics Think Tank

How to prepare your network for the demands of AI

Written by Daniel Noworatzky | Sep 17, 2025 2:17:00 PM

Most networks weren’t built for AI. The question is: will yours keep pace with the surge in bandwidth, compute, and security demands, or fall behind? AI workloads push infrastructure further than traditional data center design ever anticipated, and the organizations that adapt fastest will gain a competitive edge.

In this article, we explore the key network considerations every business must address to prepare for an AI-driven future.

Don't let resource requirements sneak up on you!

Network infrastructure design trends have been largely predictable over the past few decades, but AI-driven data center and network design is pushing requirements far beyond the usual. In this environment, there is a danger that if you neglect network capacity and resource planning, you could map out a fantastic AI-centered future for your company only to realize too late that your network infrastructure is inadequate to support it.

Here are some areas you should focus on to ensure that the development of your network doesn’t fall behind in the effort to get your organization AI-ready.

Edge network provisioning

AI workloads are exceptionally resource-intensive. Not only do they consume unprecedented amounts of compute, storage, and memory, they are also extraordinarily network-intensive. AI workloads are almost exclusively processed within specialized AI data centers, either in the cloud or on specialized edge-network infrastructure; they are virtually never processed locally.

As a result, the network is the conduit between the AI workload requester and its execution, making it a mission-critical portion of the whole infrastructure. It must be reliable, and it must also deliver the capacity demanded by the ever-increasing bandwidth that AI processes request.

As network capacity needs increase, connections to the “outside world” (such as remote branch sites, other private networks, and the internet itself) can quickly become overtaxed. Thus, the network edge can become a potential bottleneck for all network traffic destined off-site, including AI workloads! For this reason, edge network provisioning must be done correctly.

Capacity

First, provisioning the edge network for AI-centered use involves ensuring sufficient bandwidth. Connections to remote sites, private networks, third-party networks, and the internet must be sized to serve the expected peak traffic. Edge network devices such as routers, switches, VPN gateways, SBCs, and firewalls must have the computational capacity to process the expected traffic volumes. AI workloads can cause an unanticipated increase in network traffic, so these considerations should be factored into your traffic predictions.
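To make the sizing concrete, here is a minimal back-of-envelope sketch in Python. The request rate, payload size, peak factor, and headroom are purely illustrative assumptions; substitute your own traffic forecasts and measurements.

```python
# Back-of-envelope edge bandwidth estimate for off-site AI traffic.
# All figures below are illustrative assumptions, not measurements.

def required_edge_bandwidth_gbps(requests_per_sec: float,
                                 avg_payload_mb: float,
                                 peak_factor: float = 2.0,
                                 headroom: float = 0.3) -> float:
    """Estimate peak edge bandwidth (Gbps) needed for AI traffic."""
    avg_gbps = requests_per_sec * avg_payload_mb * 8 / 1000  # MB/s -> Gbps
    peak_gbps = avg_gbps * peak_factor                        # burstiness
    return peak_gbps * (1 + headroom)                         # growth/safety margin

# Example: 200 AI requests/s averaging 5 MB of data transferred each
print(f"{required_edge_bandwidth_gbps(200, 5):.1f} Gbps")     # ~20.8 Gbps
```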

Redundancy

Network edge capacities are usable only as long as an edge connection is operational. For this reason, redundancy at the edge is an essential part of network design—not only for AI but in general. However, as mission-critical applications and services become AI-dependent, redundancy becomes increasingly significant.
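The value of redundancy is easy to quantify with the standard parallel-availability calculation, sketched below. The per-link availability figures are assumptions for illustration, not vendor SLAs.

```python
# Illustrative availability math for redundant edge links.
# The per-link availability values are assumptions, not guarantees.

def parallel_availability(*link_availabilities: float) -> float:
    """Availability of N independent links in parallel (any one keeps you up)."""
    downtime = 1.0
    for a in link_availabilities:
        downtime *= (1.0 - a)
    return 1.0 - downtime

single = 0.995                                  # one link: ~43.8 hours down/year
dual = parallel_availability(0.995, 0.995)      # two links: ~13 minutes/year
print(f"single link: {single:.4%}, dual links: {dual:.6%}")
print(f"hours down per year with dual links: {(1 - dual) * 8760:.2f}")
```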

Edge network design

An edge network design that incorporates various approaches can achieve both capacity and redundancy. These include choosing the appropriate WAN technologies, employing enhanced connectivity methodologies such as SD-WAN and MPLS, and even using options such as wireless bridging where necessary.

Edge vs. cloud AI processing

Larger organizations should consider hosting some AI workloads internally within their own networks. A hybrid AI architecture integrates on-prem AI clusters with cloud AI services.

Placing AI data center resources on site provides several advantages for both the network and the organization’s AI approach overall. This can be considered a form of “edge computing,” where compute power is placed physically closer to the requester of the workload. This benefits all types of workloads, but especially AI workloads, which are several orders of magnitude more resource-intensive.

Locating compute closer to the requester naturally improves response times, since a significant share of request and response latency is eliminated, but lower latency is not the only benefit:

  • Data sovereignty and confidentiality: Local AI infrastructure allows you to safely run AI workloads on your own private datasets, including customer information, product features, financial records, and service details, letting you create customized workloads specific to your organization. You don’t have to share this information with any third-party AI provider, and employing an air-gapped infrastructure further strengthens security.
  • Control: You can choose and tune the hardware, interconnects, and software for your workloads. There are no shared-tenant limits, throttling, or surprise deprecations. You keep end-to-end policy authority (i.e., security, data residency, access, and maintenance windows), so compliance and change management happen on your schedule.
  • Cost efficiency: At scale, local AI infrastructure built on owned hardware can beat cloud total cost of ownership (TCO), especially for steady, high-utilization workloads, and it eliminates network egress fees as well (see the sketch after this list).
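As a rough illustration of that break-even logic, the following sketch compares a monthly on-prem cost (amortized capex plus power, cooling, and staff) with a monthly cloud bill (GPU hours plus egress). Every cost figure is a placeholder assumption; replace them with real quotes before drawing conclusions.

```python
# Simplified on-prem vs. cloud TCO comparison for a steady AI workload.
# All cost figures are placeholder assumptions for illustration only.

def on_prem_monthly(capex: float, lifetime_months: int,
                    power_cooling: float, staff: float) -> float:
    """Amortized hardware cost plus monthly operating costs."""
    return capex / lifetime_months + power_cooling + staff

def cloud_monthly(gpu_hours: float, rate_per_gpu_hour: float,
                  egress_tb: float, egress_per_tb: float) -> float:
    """Monthly cloud GPU rental plus data egress fees."""
    return gpu_hours * rate_per_gpu_hour + egress_tb * egress_per_tb

# Hypothetical steady, high-utilization workload
on_prem = on_prem_monthly(capex=600_000, lifetime_months=36,
                          power_cooling=4_000, staff=6_000)
cloud = cloud_monthly(gpu_hours=8 * 730, rate_per_gpu_hour=4.0,
                      egress_tb=50, egress_per_tb=90)
print(f"on-prem: ${on_prem:,.0f}/mo   cloud: ${cloud:,.0f}/mo")
```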

Using on-prem AI infrastructure options in combination with cloud-based AI offerings allows you to create a hybrid AI environment, enjoy the benefits of both worlds, and adjust to find the perfect combination that matches your organization’s needs.

Power and cooling requirements for on-premises AI infrastructure

When employing on-prem AI infrastructure, it is vital to ensure that your internal network and data center conform to requirements such as structured cabling and data center physical layer design for both copper and fiber. But that’s not all. You must also maintain a reliable supporting infrastructure, including rack space, power, and cooling. Security and compliance are also key considerations.

Power

Providing reliable and uninterrupted power for data centers is a science in itself. Redundant power sources, uninterruptible power supplies (UPSes), and ready-to-run diesel generators are must-haves. As AI workloads become increasingly mission-critical, power outages or electrical failures should not result in downtime except under the most catastrophic circumstances.
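A quick way to sanity-check UPS sizing is to confirm the battery can carry the full AI load long enough for the generators to start and stabilize. The sketch below illustrates the arithmetic; the load, battery capacity, and efficiency figures are assumptions.

```python
# Rough UPS runtime check for an AI rack; all figures are assumptions.

def ups_runtime_minutes(load_kw: float, battery_kwh: float,
                        inverter_efficiency: float = 0.92) -> float:
    """Minutes of battery runtime at a given IT load."""
    return battery_kwh * inverter_efficiency / load_kw * 60

# Example: a 120 kW AI rack backed by 40 kWh of UPS battery
# -> roughly 18 minutes, enough margin to ride through generator start-up
print(f"{ups_runtime_minutes(120, 40):.0f} minutes of runtime")
```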

Cooling

Cooling is another vital area for AI. Data centers designed to run AI workloads are typically served by specialized AI computational units such as the Nvidia GB200 NVL72, which contains 72 GPUs and 36 CPUs. The extreme CPU/GPU densities in these self-contained AI supercomputers require internal liquid cooling systems to efficiently dissipate heat from their processors.

This internal liquid cooling system may remove heat from the processors themselves, but what is then done with the heat depends upon the available infrastructure. Ideally, the liquid cooling system should lead to a coolant distribution unit (CDU) that carries away that heat, ejecting it directly from the liquid coolant to the external environment. However, for this to take place, the required coolant distribution facilities must be present.
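The facility-side requirement can be estimated from the basic heat-transfer relation Q = ṁ·cp·ΔT. The sketch below shows the arithmetic for water coolant; the 120 kW heat load and 10 °C temperature rise are assumed values for illustration, not specifications for any particular unit.

```python
# Required coolant flow for a given heat load, from Q = m_dot * cp * deltaT.
# The 120 kW heat load and 10 degC temperature rise are assumptions.

def coolant_flow_lpm(heat_kw: float, delta_t_c: float = 10.0) -> float:
    """Liters per minute of water needed to carry away heat_kw."""
    cp = 4.186            # kJ/(kg*K), specific heat of water
    density = 1.0         # kg/L, approximate density of water
    kg_per_sec = heat_kw / (cp * delta_t_c)
    return kg_per_sec / density * 60

print(f"{coolant_flow_lpm(120):.0f} L/min")   # ~172 L/min for a 120 kW heat load
```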

In most enterprise data centers that use conventional cooling infrastructure, such facilities are not readily available. AI computational units can be retrofitted with alternative methods of ejecting heat, including rear-door heat exchangers (RDHx) and liquid-to-air sidecars. Neither of these solutions is as efficient as a CDU, and both can limit the achievable GPU/CPU densities in the same physical space. Ideally, they should be used as a transitional stage before obtaining a full-fledged CDU infrastructure.

Security

An on-premises AI infrastructure must be treated as a high-value enclave. It should be segmented from the rest of the network, and strong authentication and authorization, including MFA and short-lived credentials, should be enforced. Training data, model artifacts (the files and metadata produced while training a model), and sensitive data at rest and in transit should be protected using appropriate industry-standard encryption.

Here are some additional network infrastructure best practices to help secure on-prem AI infrastructure:

  • Use next-generation firewalls (NGFWs), API gateways, and web application firewalls (WAFs) to terminate all ingress sessions and enforce strong authentication, authorization, and rate limiting so that only validated traffic reaches backends. Force all egress through a proxy with DNS filtering and data loss prevention (DLP) to control outbound data flows, block malicious destinations, and prevent unauthorized data transfer outside its trusted boundary (data exfiltration).
  • Within the data center, apply zero trust principles, including strong identity, least-privilege micro-segmentation, and continuous verification/attestation for nodes and workloads (a minimal policy sketch follows this list).
  • Out-of-band (OOB) management for AI infrastructure should be limited to its own separately segmented network.
  • If you integrate voice, UC, or RTC into your AI scheme, put a session border controller (SBC) in a DMZ to terminate SIP/TLS/SRTP. Put strict, layered controls in front of (and around) the AI services so no one can reach them directly, and only validated, least-privilege traffic gets through.
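To illustrate the deny-by-default, least-privilege posture described above, here is a minimal policy sketch. The segment names, ports, and flows are hypothetical examples, not a recommended rule set for any specific firewall or SDN platform.

```python
# Minimal deny-by-default segmentation policy for an AI enclave.
# Segment names, ports, and flows are hypothetical examples.

from typing import NamedTuple

class Flow(NamedTuple):
    src: str
    dst: str
    port: int

# Explicit allow-list; anything not listed is dropped.
ALLOWED_FLOWS = {
    Flow("api-gateway", "ai-inference", 443),    # validated ingress only
    Flow("ai-training", "object-storage", 9000), # training data access
    Flow("oob-mgmt", "ai-inference", 22),        # OOB management network only
}

def is_permitted(flow: Flow) -> bool:
    """Zero trust: deny by default, allow only explicitly approved flows."""
    return flow in ALLOWED_FLOWS

print(is_permitted(Flow("api-gateway", "ai-inference", 443)))  # True
print(is_permitted(Flow("user-lan", "ai-training", 22)))        # False
```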

Compliance

Compliance verifies that you apply the proper rules to the data you use and to how you process it. Aligning with the relevant standards, such as HIPAA and PCI DSS for regulated data and ISO 27001 and NIST 800-53 for control baselines, ensures adherence to those rules. In addition, enforcing data governance (classification, minimization, access controls, retention/erasure, and auditable logging) helps prove that compliance.

Conclusion

AI has reshaped networking’s trajectory for good. To harness its value — whether in the cloud, on premises, or using a hybrid arrangement — organizations must modernize their networks to be secure, observable, automated, and compliant by design. Understanding the concepts involved will help companies adapt, gain stability, expand capability, and meet regulatory demands with confidence.

 
