Analyst Insights: How Can NaaS Support Telecom Operators’ GPUaaS and AI Connectivity Ambitions?


Spotlight: Gorkem Yigit, Research Director, Analysys Mason

The rising adoption of AI is fuelling demand among enterprises and consumers for graphics processing unit (GPU)-based infrastructure to support model training, fine-tuning and inference workloads. To meet this need, enterprises are increasingly turning to GPU-as-a-service (GPUaaS) solutions, which can provide cost-effective, scalable access to AI infrastructure while reducing deployment times and avoiding the complexity of owning and operating this infrastructure in-house.

Analysys Mason estimates that the GPUaaS market, which spans hyperscale cloud providers, alternative cloud providers (including neoclouds and Tier 2 cloud providers) and telecoms operators, will grow from USD21 billion in 2024 to USD134 billion in 2030. (For more information, see Analysys Mason’s GPU-as-a-service (GPUaaS): worldwide forecast 2025–2030.) A neocloud is a new entrant in the cloud services market that focuses on AI and GPUaaS offerings rather than traditional cloud computing; neoclouds provide specialised infrastructure for training and inference workloads.
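The forecast figures imply a compound annual growth rate of roughly 36% per year. A minimal sketch of that calculation, using only the market sizes quoted above:

```python
# Implied compound annual growth rate (CAGR) from the quoted forecast:
# USD21 billion in 2024 growing to USD134 billion in 2030 (6 years).
market_2024 = 21.0    # USD billion
market_2030 = 134.0   # USD billion
years = 2030 - 2024

cagr = (market_2030 / market_2024) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 36% per year
```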

GPUaaS is driving diversification in the cloud market.

AI is reshaping competition in the public cloud market, particularly in the infrastructure-as-a-service (IaaS) layer, as GPUs and other AI accelerators become integral to the infrastructure that enterprises are looking to outsource.

Hyperscale cloud providers, such as AWS, Google Cloud, Microsoft Azure and Oracle, have successfully executed strategies to secure their status as market leaders. The critical feature of these strategies is the speed with which these companies moved to adapt their services for AI workloads. Their positions were further strengthened by the breadth of their AI application and platform portfolios, combined with the data gravity and customer ‘stickiness’ of existing workloads.

The hyperscalers are facing increased competition from the array of new GPU-based cloud infrastructure solutions that have emerged in recent years in response to AI-led demand. This proliferation of ‘neoclouds’ (including CoreWeave, Crusoe, Lambda, Nebius and Nscale) is gathering pace alongside renewed efforts from telecoms operators that are pursuing a second wave of entry into the cloud market. Together, these developments present a growing challenge to the market dominance of hyperscale cloud providers.

Neoclouds are expected to capture a substantial share (20–30%) of the GPUaaS market by 2027, as they rapidly deploy AI infrastructure and offer it to customers at highly competitive prices (often well below those of hyperscale cloud providers). Their ability to undercut competitors stems from building cost-efficient, GPU-optimised infrastructure from scratch, unencumbered by the need to maintain legacy systems and services. While the sustainability of many neoclouds’ business models remains uncertain, and market consolidation appears inevitable, a number of players are likely to emerge as established competitors alongside hyperscale cloud providers.

Telecoms operators are also looking to capitalise on the GPUaaS opportunity. As of 3Q 2025, around 30 operators had launched GPUaaS offerings, though most are still at an early stage and relatively small in scale. Few operators are willing or able to compete directly on cost or scale. Instead, they are positioning themselves to meet specific enterprise requirements, with sovereignty at the top of the list, while bundling AI applications, professional services and connectivity capabilities that pure-play GPUaaS providers lack. To succeed, operators need to exploit the differentiators that they have: their roles as trusted domestic partners, their experience in operating critical infrastructure and their ability to deliver SLA-backed connectivity with consistent quality and latency to serve trust-sensitive verticals such as banking, healthcare and defence.

NaaS has the potential to help operators maximise their roles in GPUaaS and AI connectivity.

Operators have initially benefitted from AI-driven demand for dark fibre and managed optical fibre connectivity for data-centre interconnect (DCI), particularly from hyperscale cloud providers that need to transfer large datasets between facilities for AI model training. This trend has been most pronounced in the US wholesale market, but there is now a growing focus on data sovereignty in Europe, Asia–Pacific and the Middle East. The need for data sovereignty, coupled with the traffic shift from AI training to inferencing, is driving a wave of distributed data-centre deployments that require high-performance connectivity across regions, enterprise edge sites and other clouds. This challenge calls for NaaS models that provide programmable, scalable and cost-effective interconnection fabric as well as more automated wholesale processes.

NaaS also presents opportunities for operators that are seeking to become GPUaaS players and to strengthen their position as AI connectivity providers for enterprises. This stems from the fact that AI workloads generate traffic patterns and requirements that differ from standard enterprise or web applications, such as:

  • bursts of extremely high-throughput demand over short periods, typically for dataset transfers during training and fine-tuning
  • heightened requirements for low-latency inferencing, quality-of-service (QoS) controls, redundancy and robust security, especially as AI models and workloads become more distributed
  • complex, dynamic data flows generated by systems composed of multiple AI agents.

Operators can address these enterprise needs by offering NaaS solutions that provide agile and flexible connectivity that is capable of supporting bursty, intermittent and high-bandwidth AI traffic flows, as well as seamless networking across multi-cloud environments. These solutions should support dynamic workload placement and data mobility between hyperscale cloud providers, neoclouds, sovereign clouds and on-premises/edge locations. They should also embed intelligent, programmable routing mechanisms that adapt to workload requirements (for example, recognising that the traditional shortest-path approach may not always be optimal for business objectives and instead allowing flexible routing decisions based on sovereignty, security, sustainability or other constraints).
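To make the routing idea above concrete, the sketch below shows policy-constrained path selection in miniature: sovereignty and encryption requirements act as hard filters, and the remaining candidates are ranked by a weighted cost rather than raw shortest path. The `Path` fields, metric names and example values are all hypothetical illustrations, not any operator’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    latency_ms: float
    jurisdictions: tuple   # countries the path traverses (hypothetical metadata)
    encrypted: bool
    carbon_gco2_per_gb: float

def select_path(paths, allowed_jurisdictions, require_encryption=True,
                carbon_weight=0.0):
    """Enforce sovereignty/security constraints first, then rank the
    surviving paths by a weighted cost instead of latency alone."""
    candidates = [
        p for p in paths
        if set(p.jurisdictions) <= allowed_jurisdictions
        and (p.encrypted or not require_encryption)
    ]
    if not candidates:
        raise ValueError("no path satisfies the policy constraints")
    return min(candidates,
               key=lambda p: p.latency_ms + carbon_weight * p.carbon_gco2_per_gb)

# Example: the lowest-latency path is rejected because it leaves the
# allowed (e.g. EU-sovereign) jurisdictions.
paths = [
    Path("transatlantic", 10.0, ("US", "UK"), True, 40.0),
    Path("intra-eu", 14.0, ("DE", "FR"), True, 25.0),
]
best = select_path(paths, allowed_jurisdictions={"DE", "FR"})
print(best.name)  # intra-eu
```

The point of the sketch is the ordering of concerns: compliance constraints prune the candidate set before any performance optimisation runs, which is why the shortest path is not always the chosen one.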

At the same time, NaaS offerings must integrate advanced security (potentially including emerging approaches such as quantum-based security) and policy enforcement capabilities to safeguard sensitive AI data flows and to maintain compliance across jurisdictions. Ultimately, enterprises will want networking partners that are capable of matching the agility and responsiveness required to support their continuously changing AI workload patterns.

Operators can extend these NaaS foundations into their GPUaaS offerings by bundling compute capacity and connectivity to provide integrated performance, availability and security tailored for AI workloads. By doing so, they can differentiate themselves from pure-play providers and deliver stickier services with the assurances required for mission-critical AI applications. Even those operators that are not providing GPUaaS directly can strengthen their roles in the AI value chain by partnering with neoclouds to meet enterprise connectivity expectations and to facilitate access to distributed AI infrastructure.

Overall, as AI drives closer integration between compute and connectivity, operators that combine GPUaaS and NaaS in cohesive, value-differentiated propositions will play an increasingly central role in shaping the architecture and economics of the AI ecosystem.


Gorkem Yigit

Research Director | Analysys Mason

Gorkem is a Research Director within the Networks and Cloud research practice at Analysys Mason. He is the lead analyst for the “Cloud and AI Infrastructure” and “NaaS Platforms and Infrastructure” research programmes. His research focuses on the building blocks, architecture and adoption of the cloud and AI native, disaggregated and open networks that underpin the delivery of 5G, multi-cloud and NaaS services and enable operational automation.

He also works on a range of consulting engagements with telecoms and IT vendors, operators and financial institutions, including strategy assessment and advisory, TCO/business case analysis and marketing support through thought-leadership collateral.