
TeleDynamics Think Tank

Building AI-ready data centers: what you need to know

Posted by Daniel Noworatzky on Jul 2, 2025 10:02:00 AM


As AI continues to dominate the headlines, discussions abound about the infrastructure needed to support it, putting data centers at the heart of the conversation. These facilities are drawing attention not only for their enormous scale and the vast amounts of power and resources they consume, but also for their potential to support a wide range of services and applications.

It is vital for anyone involved with data center deployment to understand what drives data center design for AI. In this article, we dig deeper into data centers for AI and examine a particularly interesting real-world scenario that will help shed light on these emerging patterns.

In a previous article, we described key characteristics of data centers that are purpose-built for AI. We compared more conventional facilities with their AI counterparts and observed how they differ and why. This was an initial step to get our feet wet as we wade deeper into the waters of this novel area of expertise. If you haven’t already done so, a quick review of that article will help you better understand this one.

Understanding AI workloads

In the context of a data center, a workload is a set of specific computing tasks and operations performed by a system or group of systems. It includes applications, processes, and services that consume compute, memory, storage, and network resources. Practically speaking, such tasks can include things like adding a database entry, serving a web page request, processing a set of data, or running an application on a virtual machine or container.

AI data center workloads are unique and fundamentally different from more traditional workloads, and this difference is at the heart of the vast resources these data centers demand. AI workloads fall into two broad operational categories: training and inference.

AI training workloads

AI training involves feeding massive amounts of data – including images, text, audio, video, and various other datasets – through what are known as deep learning models. These models are machine learning systems built on artificial neural networks: layered arrangements of processing units, or "neurons," that loosely mimic the human brain.

These multiple layers of artificial neural networks process vast amounts of data to automatically learn and extract complex patterns. The process is highly iterative and transforms this data into increasingly abstract yet useful representations. This bestows upon the AI model the ability to perform highly complex and conceptual tasks such as image recognition and natural language processing.
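To make this concrete, here is a minimal sketch of a forward pass through a tiny, hypothetical network: each layer is simply a matrix multiplication followed by a non-linear activation, repeated layer after layer. The layer sizes are illustrative and not tied to any real model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linear activation applied after each matrix multiplication
    return np.maximum(0.0, x)

# Three weight matrices: the forward pass is a chain of matrix
# multiplications, each followed by a non-linearity.
layer_sizes = [784, 256, 128, 10]   # e.g., image pixels in, class scores out
weights = [rng.normal(0, 0.01, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)             # one layer: matmul + non-linearity
    return x @ weights[-1]          # final layer: raw class scores

batch = rng.normal(size=(32, 784))  # a mini-batch of 32 inputs
print(forward(batch).shape)         # -> (32, 10)
```

Training repeats this forward pass (plus a backward pass to update the weights) billions of times over the full dataset, which is where the resource demands come from.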

The AI training process has several characteristics that result in exceptionally high resource demand:

  • Artificial neural networks involve very high computation volume. Each layer performs matrix multiplications and non-linear operations, which are themselves demanding calculations. A large-scale AI model may have billions of parameters, and these computations are repeated across numerous layers for every batch of training data (a back-of-envelope sketch follows this list).
  • Deep learning models thrive on data. Processing such massive amounts of data involves extensive operations, memory usage, storage, and data shuffling, which increases the burden on these systems.
  • Parallelization and synchronization are employed extensively. This involves distributing workloads across a vast number of processors, which, in turn, increases the need for more processors and networks of higher speed and capacity.
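As promised above, here is a back-of-envelope sketch of training compute, using the widely cited heuristic of roughly six floating-point operations per model parameter per training token. Every figure in it – model size, dataset size, hardware throughput – is an illustrative assumption, not data from any real deployment:

```python
# Back-of-envelope training compute using the common "6 * parameters *
# tokens" heuristic for total floating-point operations.
params = 70e9         # a 70-billion-parameter model (assumed)
tokens = 2e12         # 2 trillion training tokens (assumed)
total_flops = 6 * params * tokens

flops_per_gpu = 1e15  # ~1 PFLOP/s sustained per GPU (rough assumption)
gpus = 10_000         # training cluster size (assumed)

seconds = total_flops / (flops_per_gpu * gpus)
print(f"Total compute: {total_flops:.1e} FLOPs")
print(f"Wall-clock on {gpus:,} GPUs: ~{seconds / 86400:.1f} days")
```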

Even if the technical details above are unclear, you can still grasp, at least partially, why neural networks require such massive amounts of resources. They are data-hungry, compute-intensive, and rely on complex iterative processes across many layers and parameters. And the resource needs increase further as AI models become more complex and datasets grow.

Inference workloads

Where training workloads educate a model, so to speak, inference workloads query that trained model to make predictions and respond to new data. Practically speaking, an inference workload is initiated when you log into a language model interface such as ChatGPT and ask questions, upload an image to be analyzed, or send a dataset to be processed.

These inference tasks are generally less resource-intensive than training. However, when inference is performed at scale, such as real-time chatbots serving thousands of users simultaneously, high throughput and low latency become essential requirements for delivering a seamless experience.
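To get a feel for what "inference at scale" implies, the following sketch estimates how many GPUs a hypothetical chatbot service would need to keep up with its user base. All the figures are assumptions chosen for illustration:

```python
# A simplified look at serving inference at scale. Every figure here is
# an illustrative assumption, not a benchmark.
tokens_per_sec_per_gpu = 6_000   # assumed aggregate decode rate per GPU
tokens_per_response = 300        # assumed average response length

def responses_per_second(gpus: int) -> float:
    # Aggregate throughput: full responses the GPU pool completes per second
    return gpus * tokens_per_sec_per_gpu / tokens_per_response

concurrent_users = 50_000
needed_rps = concurrent_users / 10   # assume one request per user every 10 s

for gpus in (100, 500, 1_000):
    rps = responses_per_second(gpus)
    verdict = "keeps up" if rps >= needed_rps else "falls behind"
    print(f"{gpus:>5} GPUs -> {rps:,.0f} responses/s "
          f"(need {needed_rps:,.0f}): {verdict}")
```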

Abilene AI data center: a case study

As AI workloads push data center design to new extremes, real-world implementations help illustrate how theory translates into practice. One standout example is the AI-focused data center in Abilene, Texas, which is a three-and-a-half-hour drive from the TeleDynamics offices in Austin. This project has gained attention for its scale, specialization, and strategic significance.

The Abilene AI data center is part of the Stargate Project, a $500 billion national AI infrastructure initiative spearheaded by OpenAI, Oracle, and SoftBank. It is one of the largest data centers resulting from this initiative and is currently under construction.

Abilene AI data center construction (source: https://crusoe.ai/)

Purpose

The data center's primary purpose is to lease out AI processing capacity, hosting large-scale training and inference workloads for applications including natural language processing and computer vision.

Facilities

According to the primary developer, Crusoe Energy Systems, the first two buildings will be energized in 2025. Together, they will offer almost one million square feet of facility space and draw over 200 megawatts of power. The two buildings will house roughly 100,000 GPUs between them, deployed in Nvidia GB200 NVL72 systems that contain 72 GPUs each.

The next phase of construction, expected to be complete in mid-2026, will add six more buildings, bringing the campus to over 4 million square feet of space and a total power capacity of 1.2 gigawatts. This 1.2 GW refers to the maximum amount of electricity the data center can consume at any given moment, which is huge: it is comparable to the energy demand of Austin, Texas, a city of about 1 million people. And it doesn't end there. Over the facility's lifetime, which is expected to span several decades, its power capacity is projected to grow beyond 10 GW as demand and development continue.
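As a sanity check on that 1.2 GW figure, the sketch below converts it into rough rack and GPU counts, assuming a commonly cited draw of about 120 kW per GB200 NVL72 rack plus an overhead factor for cooling and power distribution; neither number is a published specification for this facility:

```python
# Converting 1.2 GW of capacity into rough rack and GPU counts. The
# ~120 kW per NVL72 rack figure is a commonly cited approximation, and
# the overhead factor is an assumption; neither is a published spec
# for this facility.
facility_power_w = 1.2e9   # 1.2 GW total capacity
rack_power_w = 120e3       # ~120 kW per GB200 NVL72 rack (assumed)
overhead = 1.3             # assumed factor for cooling, power distribution

racks = facility_power_w / (rack_power_w * overhead)
gpus = racks * 72          # 72 GPUs per NVL72 rack
print(f"~{racks:,.0f} racks and ~{gpus:,.0f} GPUs at full build-out")
```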

Location

The choice of location for the data center was highly strategic. Abilene, Texas, is centrally located on the continent, has access to clean power, and sits near major fiber routes. The data center occupies a plot of land that various sources have reported to be anywhere between 875 and 1,100 acres. To put this into perspective, the first two buildings are estimated to cover no more than 25 acres, leaving ample room for physical expansion.

In addition, even though Texas is often thought of as a hot place, the climate in West Texas actually supports efficient free-air cooling. Its dry air combined with cooler nights and milder temperatures during the autumn, winter, and early spring enables the use of free-air cooling during certain periods, significantly improving overall energy efficiency.

Design highlights

The data center is designed to host NVIDIA's GB200 NVL72s, specialized GPU pods that each contain 72 GPUs and 36 Grace CPUs. These units provide unprecedented GPU density, delivering substantial savings on rack space. However, there's a tradeoff: such high processor density requires highly efficient cooling, which is why these pods have built-in liquid-cooling systems. The pods also require specialized rack layouts and power provisioning to accommodate this equipment.
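To illustrate the rack-space savings, the following sketch compares the racks needed to house a given number of GPUs in NVL72 pods versus a conventional air-cooled layout; the baseline figures are illustrative assumptions, not measured deployments:

```python
# Rack-space comparison for a given GPU count. The conventional
# baseline (8-GPU air-cooled servers, ~4 per rack) is an illustrative
# assumption, not a measured deployment.
target_gpus = 100_000

nvl72_racks = target_gpus / 72              # 72 GPUs per liquid-cooled pod
conventional_racks = target_gpus / (8 * 4)  # ~32 GPUs per air-cooled rack

print(f"NVL72 racks:        {nvl72_racks:,.0f}")
print(f"Conventional racks: {conventional_racks:,.0f}")
print(f"Space reduction:    ~{conventional_racks / nvl72_racks:.1f}x")
```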

The power draw expected over the data center's lifetime will not be achievable using the existing power grid. Specialized power systems and upgraded substations in the area will be required, in partnership with local utility providers. Smart power management and redundant power paths are part of the grid design, ensuring availability during peak AI compute loads and more efficient power usage overall.

High-performance, low-latency network fabrics will also be used to support the massive east-west traffic generated by distributed training jobs. Custom high-speed optical interconnects between racks will be used for this purpose.
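To see why such fabrics are needed, the sketch below estimates the sustained per-server traffic generated by gradient synchronization in data-parallel training, using the standard ring all-reduce cost model; the model size and step rate are assumptions:

```python
# Per-server traffic from synchronizing gradients in data-parallel
# training, using the standard ring all-reduce cost: each worker sends
# and receives about 2 * (N - 1) / N times the gradient size per step.
# Model size and step rate are illustrative assumptions.
params = 70e9            # 70B parameters (assumed)
bytes_per_value = 2      # 16-bit gradients
workers = 1_024          # data-parallel workers (assumed)

grad_bytes = params * bytes_per_value
per_worker_bytes = 2 * (workers - 1) / workers * grad_bytes

steps_per_sec = 0.5      # one optimizer step every two seconds (assumed)
gbits_per_sec = per_worker_bytes * steps_per_sec * 8 / 1e9
print(f"Sustained sync traffic per worker: ~{gbits_per_sec:,.0f} Gb/s")
```

Sustained traffic on this order, flowing between thousands of servers at once, is why commodity data center networking falls short and purpose-built, low-latency fabrics are used instead.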

Broader implications

The Abilene project is more than a data center: it is a model of how infrastructure is evolving to meet the needs of a rapidly developing AI-driven digital economy. This model will contribute to progress in several key areas, including:

  • Strategic location planning: AI data centers may shift away from crowded tech hubs toward energy-abundant, land-rich areas.
  • Specialization: Traditional colocation models are being replaced by custom AI-ready facilities.
  • Economic impact: Projects like this bring new jobs, infrastructure investment, and long-term energy considerations to the communities they enter.

Conclusion

AI-specific data center design is still in its infancy. Even so, unprecedented progress has been made over the past few years, resulting in technological shifts beyond most predictions. This trend is expected to continue and to accelerate even more as AI increasingly becomes an integral part of business and of life.

