
Omdia view

Summary

According to Omdia’s Global Telecoms Opex Tracker, telecom operators spend around 3% of their operational expenditures (opex) on network utilities, mainly electricity. This expenditure is likely to grow as telcos add AI infrastructure to their existing cloud footprint.

Optimizing energy consumption across central, regional, and edge infrastructure is essential. Cloud native architecture offers multiple energy-saving mechanisms that can be employed at the workload, cloud software, and physical infrastructure levels. Efficient workload scheduling, granular management of resources (central processing units (CPUs), data processing units (DPUs), and graphics processing units (GPUs)), and the use of offload or acceleration hardware can help telcos gain additional improvements on top of those delivered by the latest generation of silicon. However, the complexities of cloud native networks make the path to energy efficiency more challenging, requiring a nuanced approach that balances technological innovation with operational feasibility.

Operator spending on network utilities has been rising

Energy efficiency has emerged as a critical priority for telecom operators worldwide, driven by the dual imperatives of reducing opex and meeting sustainability goals. According to Omdia’s Global Telecoms Opex Tracker, telecom operators spent $38.2bn on network utilities in 2024, a 40% increase from 2019. This surge in energy consumption is largely attributed to the growing demands of modern telecom networks, including the deployment of 5G infrastructure and the integration of AI/ML workloads. With the radio access network (RAN) accounting for approximately 70–75% of the overall energy consumption for mobile network operators (MNOs), and telco core networks and data centers contributing another 20–25%, the need for energy optimization across the entire network footprint is undeniable.

Cloud infrastructure has not been optimized for efficient power usage

Underutilization of CPU resources is a defining characteristic of telco cloud infrastructure, with typical utilization rates significantly lower than those observed in IT cloud environments. For instance, BT reports that its cloud infrastructure utilization for network workloads, including cloud-native network functions (CNFs) and virtual network functions (VNFs), stands at just 40%. Several factors contribute to this inefficiency, with overprovisioning of resources being a primary cause. To ensure high availability, CNFs are often deployed in N+1 or N+2 redundancy configurations, leaving many resources stranded and underutilized, particularly during off-peak hours.

A more pressing challenge, however, is workload fragmentation. In telco cloud environments, workloads are pre-allocated, meaning CNFs are assigned fixed or reserved CPU and memory resources that align with their peak capacity requirements. Additionally, performance constraints and quality of service (QoS) guarantees necessitate predictable execution for certain workloads, such as user plane functions, which often require isolated cores. These pinned workloads lead to rapid fragmentation of nodes, restricting the flexibility of the cloud infrastructure for dynamic workload scheduling. As a result, consolidating workloads onto fewer nodes becomes increasingly difficult.
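To make the fragmentation effect concrete, the minimal sketch below models a cluster of three nodes with pinned cores; the core counts and CNF request are illustrative assumptions, not measured values. Although aggregate free capacity exceeds the new CNF’s request, no single node can host it, so the workload cannot be placed without adding a server.

```python
# Minimal sketch: how pinned cores strand capacity across nodes.
# Node core counts and pinning figures are illustrative assumptions.

nodes = {
    "node-1": {"total_cores": 32, "pinned_cores": 26},  # e.g., UPF pinned here
    "node-2": {"total_cores": 32, "pinned_cores": 24},
    "node-3": {"total_cores": 32, "pinned_cores": 28},
}

new_cnf_cores = 10  # hypothetical CNF needing 10 dedicated cores on one node

free_per_node = {n: v["total_cores"] - v["pinned_cores"] for n, v in nodes.items()}
total_free = sum(free_per_node.values())

# Aggregate free capacity (18 cores) exceeds the request, but no single
# node can satisfy it, so the CNF cannot be scheduled on this cluster.
placeable = any(free >= new_cnf_cores for free in free_per_node.values())
print(f"free cores per node: {free_per_node}")
print(f"total free cores: {total_free}, placeable on one node: {placeable}")
```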

Energy consumption is another critical consideration. In a typical data center, servers and IT infrastructure can account for up to 40% of total power usage. CPUs alone consume up to 60% of the power supplied to a server. This underscores the importance of efficient workload scheduling and CPU core management in telco cloud environments to optimize energy consumption and improve overall resource utilization.
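Taken at their upper bounds, these two figures imply that CPUs alone can draw roughly a quarter of total facility power, as the back-of-the-envelope calculation below shows.

```python
# Back-of-the-envelope implication of the two figures above (upper bounds).
server_share_of_facility = 0.40  # servers/IT gear: up to 40% of facility power
cpu_share_of_server = 0.60       # CPUs: up to 60% of power supplied to a server

cpu_share_of_facility = server_share_of_facility * cpu_share_of_server
print(f"CPUs can draw up to ~{cpu_share_of_facility:.0%} of total facility power")
# -> ~24%, which is why CPU core management has facility-level impact
```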

Cloud native architecture offers multiple avenues to lower energy consumption

In a recent collaboration with Intel, Nokia successfully validated its user plane function (UPF) and control plane functions, such as the Session Management Function (SMF), on Intel’s sixth-generation Xeon processor, Sierra Forest. These processors are built on Intel’s efficient-core (E-core) architecture, which offers a high core density of up to 288 cores, a significant improvement over the existing performance-core (P-core) architecture.

Nokia’s experience with these high-performance processors demonstrated a substantial reduction in the number of servers required to support the same traffic levels as older-generation processors. This reduction creates opportunities for telcos to consolidate workloads onto fewer servers, thereby reducing the power consumption of their telco cloud infrastructure. Additionally, the new processors are designed to deliver improved performance per watt, consuming less power while maintaining the same output.

The latest processors also come with advanced software features, such as Intel’s Infrastructure Power Manager (IPM), which further enhances CPU power performance. These features are critical for telcos aiming to optimize energy efficiency in their cloud environments.

To maximize the utilization of compute and storage resources, telcos must adopt advanced workload scheduling and orchestration techniques. Kubernetes, for example, supports late binding of pods to resources (e.g., compute, storage, field-programmable gate arrays (FPGAs), and GPUs), which reduces idle costs and allows more CNFs to be packed per node. Many container-as-a-service (CaaS) platforms, such as VMware’s Telco Cloud Platform, already offer this capability. Similarly, OpenStack-based telco cloud platforms can leverage open source projects such as Watcher and Ironic to consolidate workloads and manage server power states effectively.
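The sketch below illustrates the consolidation idea behind these mechanisms with a simple first-fit-decreasing bin-packing pass. It is a simplified stand-in, not the scheduling logic of Kubernetes, VMware’s Telco Cloud Platform, or Watcher, and it ignores real-world constraints such as affinity rules, NUMA topology, and QoS classes; the CNF sizes and node capacity are assumed values.

```python
# Minimal sketch of workload consolidation: pack CNF core requests onto as
# few nodes as possible (first-fit decreasing) so that empty nodes can be
# powered down. Real schedulers also weigh affinity, NUMA, and QoS.

def consolidate(cnf_core_requests: list[int], node_capacity: int) -> list[list[int]]:
    """Return a list of nodes, each holding the core requests packed onto it."""
    packed_nodes: list[list[int]] = []
    for request in sorted(cnf_core_requests, reverse=True):  # largest first
        for node in packed_nodes:
            if sum(node) + request <= node_capacity:
                node.append(request)   # first node with enough room
                break
        else:
            packed_nodes.append([request])  # open a new node
    return packed_nodes

# Illustrative CNF sizes (cores): a naive one-CNF-per-node layout would use
# eight servers; consolidation packs them into two, freeing six to power off.
requests = [12, 8, 8, 6, 4, 4, 2, 2]
packed = consolidate(requests, node_capacity=32)
print(f"{len(packed)} nodes used: {packed}")
```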

However, understanding actual core utilization can be challenging for certain network workloads, such as UPF, which rely on the Data Plane Development Kit (DPDK) for accelerated packet processing. Since UPF workloads are often pinned to specific cores for performance reasons, dynamic scheduling mechanisms are typically not applied. To address this, ZTE employs an external AI/ML-based load prediction mechanism to improve energy savings for UPF workloads. This approach uses AI/ML models to predict network traffic supported by the UPF and determines the number of cores required. The information is then fed into the CaaS platform to consolidate workloads and manage the power states of both allocated and unallocated cores.
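A minimal sketch of such a prediction-driven loop is shown below. The moving-average predictor is a stand-in for ZTE’s AI/ML model, and the per-core throughput, headroom factor, and traffic samples are illustrative assumptions; a real implementation would hand the resulting core count to the CaaS platform, which parks surplus cores in a low-power state and wakes them when predicted load rises.

```python
# Sketch of a prediction-driven core management loop for a pinned UPF.
# The moving average stands in for ZTE's AI/ML predictor; the per-core
# capacity and headroom factor are illustrative assumptions.
import math

PER_CORE_GBPS = 5.0  # assumed packet throughput one pinned core can sustain
HEADROOM = 1.2       # keep 20% spare capacity for traffic bursts

def predict_traffic_gbps(recent_samples: list[float]) -> float:
    """Stand-in predictor: moving average of recent UPF throughput."""
    return sum(recent_samples) / len(recent_samples)

def required_cores(predicted_gbps: float) -> int:
    """Translate predicted load into the number of cores to keep active."""
    return math.ceil(predicted_gbps * HEADROOM / PER_CORE_GBPS)

# Example: off-peak traffic trending down from 40 to 25 Gbps. The target
# core count would be fed to the CaaS platform to manage power states of
# both allocated and unallocated cores.
samples = [40.0, 33.0, 28.0, 25.0]
print(f"target active cores: {required_cores(predict_traffic_gbps(samples))}")
```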

The design of CNFs as microservices also plays a pivotal role in improving resource efficiency. Microservices enable granular resource allocation and independent scaling. Nokia’s experience highlights that their modular, microservices-based network functions were well-suited to Intel’s Sierra Forest E-core architecture, enabling efficient horizontal scaling and better utilization of resources.

However, complexities demand balancing technological innovation with operational feasibility

The range of energy-saving techniques employed in cloud infrastructure varies based on the nature of workloads supported by each node. Telco core data centers are designed to host diverse application types, each with distinct performance expectations for compute, networking, and storage resources. Telcos must carefully balance their energy reduction initiatives with the need to uphold network performance and meet service level agreements (SLAs). For instance, signaling workloads require real-time or near-real-time responsiveness from the infrastructure, while user plane workloads demand high packet processing capabilities. In contrast, applications such as operations support systems (OSS) and business support systems (BSS) generally have less stringent latency and performance requirements.

Modern processors offer advanced power management features, such as dynamic modulation of CPU idle states (C-states) and performance states (P-states), which can significantly reduce energy consumption. However, the implementation of these features in telco networks is often constrained by latency-sensitive applications, where even minor delays, such as the wake-up time from a deep idle state, can impact service quality.
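As an illustration, the sketch below applies a per-core frequency policy through the standard Linux cpufreq sysfs interface, keeping latency-sensitive cores on the "performance" governor while moving tolerant cores (e.g., those hosting OSS/BSS workloads) to "powersave". The core assignments are hypothetical, writing to sysfs requires root privileges, and the available governors vary by driver and kernel.

```python
# Minimal sketch: per-core P-state policy via the Linux cpufreq sysfs
# interface. Core assignments are hypothetical; requires root, and the
# governors available depend on the cpufreq driver in use.
from pathlib import Path

def set_governor(cpu: int, governor: str) -> None:
    """Set the cpufreq scaling governor for a single logical CPU."""
    path = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_governor")
    path.write_text(governor)

latency_sensitive = {0, 1, 2, 3}  # e.g., cores pinned to user plane workloads
all_cores = range(8)              # illustrative 8-core node

for cpu in all_cores:
    set_governor(cpu, "performance" if cpu in latency_sensitive else "powersave")
```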

The introduction of DPUs and GPUs into cloud infrastructure further complicates energy reporting and optimization. Unlike CPUs, GPU infrastructure is still evolving in terms of multi-tenancy capabilities, which limits its ability to efficiently share resources across workloads. This adds another layer of complexity to energy management in telco cloud environments, requiring telcos to adopt tailored strategies that account for the unique demands of their workloads while optimizing energy usage.

Appendix

Further reading

Energy Efficiency in Cloud Native Networks (January 2026)

Author

Inderpreet Kaur, Senior Analyst, Telco Cloud and Network Automation

[email protected]