The AI-Native Telco

Nvidia plots AI grids with operators, partners

By Ray Le Maistre

Mar 18, 2026

  • Nvidia started its network operator courtship with AI-RAN
  • Now it has expanded its proposition with an AI grid architecture
  • It is being put to the test by the likes of AT&T, Comcast, Charter Communications, T-Mobile US and Indonesia’s IOH
  • Tech partners, including Cisco and HPE, are also on board

Nvidia is determined to provide the technology foundations for the next generation of telco edge/access deployments. Its opening gambit was AI-RAN, whereby radio access network baseband functions would run on GPUs, a proposal that is slowly gaining traction with the mobile network operator community. 

Now, during its GTC (GPU technology conference) event, Nvidia has broadened its pitch and teamed up with an initial six network operators, mostly in the US, to propose an architecture it calls AI grids – distributed AI infrastructure deployments that make use of any kind of network node, whether that is part of a fixed, wireless or content delivery network. 

Nvidia is very clear about the important role it believes telecom infrastructure will play in the AI era, describing it in this announcement as “the next frontier for distributing AI”.

The network operators already on board with the AI grid concept are: US telcos AT&T and T-Mobile US – which is already heavily involved in AI-RAN developments, as we reported earlier this week; US cable network operators Comcast and Charter Communications (which presents itself to customers using the Spectrum brand); global content delivery network (CDN) giant Akamai; and progressive Asian telco Indosat Ooredoo Hutchison (IOH), another AI-RAN advocate that has also long been developing AI factory infrastructure in Indonesia in partnership with Nvidia (see Indosat taps Nvidia for AI muscle). 

They are developing AI grids in ways that make sense for their existing network architectures and for their business needs, with some “starting by lighting up existing wired edge sites as AI grids they can monetise today” and others starting with their existing AI-RAN and/or AI factory deployments. The key, of course, is that the individual sites/nodes only act as grids if they are linked by high-speed data connections, something all these network companies have in their armoury. 

These network operators also have access to that other precious commodity – power. “Telcos and distributed cloud providers [such as Akamai] run some of the most expansive infrastructure in the world: About 100,000 distributed network datacentres worldwide, spanning regional hubs, mobile switching offices and central offices, with enough spare power to offer more than 100 gigawatts of new AI capacity over time,” notes Nvidia. And that would be a lot of capacity if fully utilised.

And here’s the nub of Nvidia’s pitch: “AI grids turn this existing real-estate, power and connectivity into a geographically distributed computing platform that runs AI inference closer to users, devices and data, where response and cost per token align best. This is more than an infrastructure upgrade – it’s a structural change in how AI is delivered, putting telecom networks at the centre of scaling AI rather than just carrying its traffic.” 

That sounds great but, of course, to make it stack up financially there needs to be pent-up demand from customers (enterprises, governments, even individuals) that are prepared to pay for AI inference, and that business case has yet to be proven. 

That said, this looks (at least to this editor) like a very meaningful step forward for Nvidia’s engagement with the network operator community, as it addresses more use cases and opportunities for collaboration than AI-RAN as a standalone option.

And as usual, Nvidia is bringing dedicated tech and plenty of partners to the table. It has developed an AI Grid Reference Design that defines the tech building blocks for deploying and orchestrating AI across distributed sites, while the likes of Cisco, HPE (which has unveiled its HPE AI Grid platform), Armada, Rafay and Spectro Cloud offer supporting systems for building the grids and managing and orchestrating the AI workloads. 

So what are the network operators doing? 

AT&T

AT&T, which boasts more than 100 million internet of things (IoT) connections, has teamed up with Nvidia and Cisco to build an AI grid for IoT. “By running AI on a dedicated IoT core and moving AI inference closer to where data is created, AT&T can support mission‑critical, real‑time applications like public‑safety use cases with Linker Vision” – an AI software startup in which Nvidia is an investor – “enabling faster detection, alerting and response while helping keep sensitive information under customer control at the network edge,” notes Nvidia. 

The approach combines AT&T’s dedicated IoT core with Cisco’s Mobility Services Platform – to “support localised traffic breakout, deterministic performance, and zero-trust security for regulated and critical use cases” – and the associated Cisco AI Grid with Nvidia, which Cisco describes in this blog as a “full-stack AI architecture that enables service providers to deliver real-time AI inferencing services across distributed networks”. This combination “provides a highly secure, end-to-end pathway from edge devices across the AT&T network, and into Nvidia accelerated compute”, according to AT&T. “This network-driven approach reduces complexity, improves performance, and provides the operational scale required to move edge AI from pilots into full production,” added the telco. 

Shawn Hakl, senior VP of product at AT&T Business, who recently discussed the impact of AI on the communications sector during a TelecomTV interview, stated: “Scaling AI services that are both highly secure and accessible for enterprises and developers is a core pillar of our IoT connectivity strategy. By combining AT&T’s business‑grade connectivity, localised AI compute and zero‑trust security while working with members of the Nvidia Inception programme and harnessing Cisco’s AI Grid with Nvidia infrastructure and Cisco Mobility Services Platform, we’re bringing real‑time AI inference closer to where data is generated – accelerating digital transformation and unlocking new business opportunities.”

The partners recently completed a successful public-safety use case demonstration at AT&T Discovery District in Dallas, showcasing Linker Vision’s Physical AI and Reasoning AI Platform powered by Cisco’s pilot platform, Nvidia GPUs, and AT&T Video Intelligence.

Masum Mir, senior VP and general manager for provider mobility at Cisco, stated: “Physical AI is accelerating the shift from centralised intelligence to distributed decision-making at the network edge. Our partnership with Nvidia brings together the full stack – from Nvidia GPUs to Cisco’s networking and mobility capabilities – enabling operators to power mission-critical applications, deliver real-time inferencing and participate in the AI value chain.”

T-Mobile US

T-Mobile US and Nvidia announced their latest AI-RAN-enabled collaboration earlier this week – see Nvidia, T-Mobile share AI-RAN vision – though they didn’t, at the time, use the term ‘AI grid’ to describe what they are developing. Nvidia notes that “developers including Linker Vision, Levatas, Vaidio, Archetype AI and Serve Robotics are already piloting smart‑city, industrial and retail applications on the grid, connecting cameras, delivery robots and city‑scale agents to real-time intelligence on the network edge.” 

Comcast

Comcast is making use of its cable broadband network architecture to develop its own AI grid, working with Nvidia, AI model developer Decart, small language model developer Personal AI and HPE. (You can find out more about Decart, Personal AI and Linker Vision and the roles they play in the AI grid in this Nvidia blog.)

The cable services giant aims to “bring AI processing, using Nvidia GPUs, closer to customers than ever before to accelerate the development of next-generation AI applications across America,” it noted in this announcement.

The operator is running a field trial that “takes advantage of Comcast’s nationwide, deeply distributed architecture that reaches 65 million homes and businesses and is purpose-built for low-latency, high-bandwidth performance.” It aims to “show how running AI at the network edge can unlock faster, smarter, more responsive experiences. For consumers and businesses, that translates to quicker apps, more relevant recommendations, smoother gaming, and AI-powered tools that respond instantly.” 

Specifically, Comcast notes that its network “is designed to put more computing power physically closer to customers, creating one of the largest and most capable platforms in the US for delivering real-time AI inference with significantly reduced latency, power consumption, and cost. With advanced DOCSIS 4.0 FDX [full duplex] nodes, smart amplifiers, and intelligent gateways across its footprint, Comcast can support real-time AI inference at scale – something traditional centralised, fibre-only, or wireless networks cannot match,” claims the cable firm. “As more AI workloads move from distant datacentres to local edge locations, Comcast’s architecture positions the company as a key contributor to the emerging AI Grid,” it adds.

Charter/Spectrum

Charter/Spectrum is targeting a specific use case in its trial, which is making use of its edge compute infrastructure (ECI) that “positions hundreds of megawatts of power… less than 10 milliseconds away from 500 million devices in homes and businesses,” the operator noted in this announcement.

Spectrum is to render high-resolution graphics for media production using remote GPUs (RTX PRO 6000 Blackwell Server Edition) embedded across its fibre broadband network. “The solution enables animation artists to render blockbuster-level CGI with GPU compute resources located nearby at the edge of Spectrum’s fibre-powered broadband network. The proximity of Spectrum’s ECI to studios, coupled with [a] 100 Gbit/s low-latency fibre network, extends the power of the Nvidia AI Grid to remote workstations,” it added. 

Akamai

CDN giant Akamai is enhancing and expanding its globally distributed AI grid, the Akamai Inference Cloud, across more than 4,400 edge locations around the world with thousands of Nvidia RTX PRO 6000 Blackwell Server Edition GPUs.

“By integrating Nvidia AI infrastructure into Akamai’s infrastructure, and leveraging intelligent workload orchestration across its network, Akamai intends to move the industry beyond isolated AI factories toward a unified, distributed grid for AI inference,” the company noted in this announcement.

Adam Karon, chief operating officer and general manager of the Cloud Technology Group at Akamai, noted: “AI factories have been purpose-built for training and frontier model workloads – and centralised infrastructure will continue to deliver the best tokenomics for those use cases. But real-time video, physical AI and highly concurrent personalised experiences demand inference at the point of contact, not a round trip to a centralised cluster. Our AI grid intelligent orchestration gives AI factories a way to scale inference outward — leveraging the same distributed architecture that revolutionised content delivery to route AI workloads across 4,400 locations, at the right cost, at the right time.”

Indosat Ooredoo Hutchison (IOH)

Indosat Ooredoo Hutchison is connecting its sovereign AI factory in Indonesia with distributed edge and AI‑RAN sites across the country to build an AI grid for local innovation. By running Sahabat-AI – a Bahasa Indonesia-based platform – on this grid within Indonesia’s borders, IOH can “bring localised AI services closer to hundreds [of] millions of Indonesians across thousands of islands, giving local developers and startups a sovereign platform to build AI applications that are fast, culturally relevant and compliant by design,” noted Nvidia. 

- Ray Le Maistre, Editorial Director, TelecomTV
