Operators are concerned about the integration and interoperability problems they face because of the fragmented nature of network and software architectures, a situation that is being aggravated by their migration to the cloud.
In the telecoms industry, everything constantly changes, and yet some things remain the same. Technologies, standards, architectures, and practices all evolve, but amid increasing multi-network and multi-vendor complexity, there remains a continuing need to get integration and interoperability right to ensure everything hangs together. Closely linked to that are acute concerns over who is actually best positioned to make this happen. So, it’s no surprise that these issues cropped up in many of the conversations and sessions I was involved in at the recent Amsterdam-based Network X conference. Network X is the new one-stop shop that brings together two longstanding industry events—Broadband World Forum and 5G World—as well as a new event dedicated to Telco Cloud, takeaways from which form the basis of this article.
Integration and interoperability concerns
Multiple technologies and increasing network complexity are placing integration and orchestration issues top of mind. Several speakers at the event stressed the integration and interoperability problems they were facing because of the current fragmented nature of network and software architectures. And this is even before network disaggregation and vendor diversification have fully hit the industry.
The fragmented nature of the network environment and proprietary solutions not only increases service providers’ operational costs but can also make the networks more fragile and less future-proof. As Pierre-Marie Binvel, Director of Marketing Connectivity at Orange Business Services explained: “It’s understandable that companies continue to seek to embed additional value in their standards, but if market standardization is lacking, then it will directly impact on both costs and the shelf-life of their infrastructure.”
This complex situation is being aggravated by the migration to cloud. With service providers relying heavily on multiple network vendors and hyperscaler partners to support the shift to cloud, we are seeing the various parties employing a wide mix of tools and approaches. With so many building blocks in play, it becomes even more difficult to standardize or harmonize management and orchestration processes. Nor does it help that different vendors are running different versions of container orchestration platforms such as Kubernetes. It is sometimes assumed that open source will help to address many of the issues inherent in proprietary solutions, but in the words of one Network X panelist: “The challenges associated with integrating open source are not trivial.”
We also heard a lot at Network X from Deutsche Telekom, Orange, Telenor, and other operators about the multicloud strategies they are pursuing, driven not just by their own need for flexibility but also by the multicloud requirements of their customers, who inevitably come to the relationship with their own ecosystems and sets of partners. This multicloud approach is another factor adding to the aforementioned complexity, creating a need for better orchestration and end-to-end performance management.
Paradoxically, as well as being a headache for service providers, the complexity could also provide them with an opportunity to position themselves as an integrator in their own right, providing value-added services that run over a wide range of complex environments. In the words of Orange Business Services’ Binvel: “We increasingly see ourselves as an orchestrator, a partner of trust.”
Hyperscalers are still the elephant in the room
For both vendors and operators, a shift to the cloud may be inescapable. But what mix of private, hybrid, and public cloud environments to adopt is less clear. There are many questions about which tools and versions of proprietary or open standards to employ, and what mix of partners to work with.
Service providers are clearly still trying to figure out the best way to work with the hyperscalers such as AWS, Microsoft Azure, and Google Cloud. What was apparent from the service provider conversations and sessions at the event was not so much outright skepticism about partnering with hyperscalers as uncertainty about the pros and cons of different approaches in different domains. This involves concerns over which cloud environments are most appropriate for which workloads, how to manage multicloud environments without increasing complexity and creating new vertical silos, as well as which mix of partners will deliver the best results.
There are also concerns over cloud provider lock-in, stemming from the technical difficulties and costs associated with shifting from one cloud provider to another. One important aspect of lock-in that concerns operators is the prospect of costs going up once they find themselves locked into a public cloud relationship, something that was specifically highlighted in one of the sessions by Nathan Rader, VP Cloudified Production at Deutsche Telekom. It is understandable to hear this message coming from DT, as larger operators have more opportunities for private cloud economies of scale than their smaller counterparts.
At the Network X event, there was certainly a skeptical vibe around the benefits of working too closely with the hyperscalers on the network side. It is understandable that innovators such as AT&T, Dish Network, or Rakuten receive the attention that they do, but that can lead us to forget that there is still an awful lot of industry uncertainty or reluctance to place network workloads in the public cloud. This was neatly summarized in the keynote address by Sampath Sowmyanarayan, CEO of Verizon Business, when he made clear that Verizon would never put its core network in the public cloud because “large telcos should own their own destiny.” That view was mirrored by DT’s Rader, who said the operator did not plan to let hyperscalers host its network functions in their own facilities any time soon.
The regular surveys of global service providers that Omdia conducts tend to confirm that telcos are not rushing to place network functions in the public cloud. The latest Omdia Telco Cloud Evolution Survey, published in September, shows over half of respondents believe that the optimal place to run network functions is in the private cloud, a quarter favor a balance of public and private cloud, and only 12% believe the public cloud is the best choice.
The same survey also suggests that service providers are continuing to spread their bets, with around 60% of respondents using at least two public cloud providers and 30% using three or more. So, as mentioned earlier, we can expect to see increasing pressure to address cloud interconnectivity and deliver the network optimization required to support hybrid and multicloud.
Telco Cloud Evolution Survey – 2022 (September 2022)
Service Provider Digital Transformation and Cloud Strategy Survey – 2022 (December 2021)
Kris Szaniawski, Research Director, Access, Software & Transformation