Guy Daniels, TelecomTV (00:05):
Hello, you are watching TelecomTV. I'm Guy Daniels. How is cloud-native adoption transforming telecoms, from overcoming barriers and proving real business impact to shaping the next wave of innovation through convergence with enterprise IT? And what is the role of AI, edge and next-generation services in creating new opportunities and revenue streams for network operators? Well, joining me now to answer these questions is Paul Miller, CTO of Wind River. Paul, really good to talk with you again. Let's start by looking at cloud-native adoption and its business impact, because we know cloud native can deliver substantial benefits to telcos. But what do you see as the main barriers to adoption, and how can these be overcome?
Paul Miller, Wind River (00:58):
What's interesting is that, over the past decade or so (remember, Kubernetes and containers came to the surface around 2014, so we're more than 10 years deep into this technology), we find our customers actually quite expert in this area. The complexity comes from the integration of multi-vendor solutions, right? Bringing in the application layer, the infrastructure layer, the hardware, et cetera, and the hard work that goes on behind that is really the thing that scares operators away from adopting cloud native, not anything about the cloud-native technology itself. The benefits are there, but there's a lot of fear with respect to that integration effort. Fortunately, the industry has now done a lot of work to integrate outside of the operator environment, so service providers can have an easier path to adopting the technology.
Guy Daniels, TelecomTV (01:42):
That's good to hear, Paul. So can you perhaps share any examples where cloud native deployments have already delivered measurable operational or business success?
Paul Miller, Wind River (01:53):
Yeah, absolutely. So we've seen, and many in the industry know about, the incredible deployment at Verizon: a national 5G deployment leveraging vRAN and Open RAN approaches, fully virtualized, 100% cloud native and fully container-based across the entire United States. Verizon executives have publicly stated in interviews that it's equal to or superior to traditional RAN performance, so they're quite happy with the benefits they're seeing from deploying cloud RAN technology. And of course, we've seen similar things with Boost Mobile, with Vodafone in the UK and with many other service providers that we work with. So we think we're well past the point of maturity here: the technology has proven its TCO and proven its operational benefits.
Guy Daniels, TelecomTV (02:39):
So the benefits have already been proven. Looking ahead, then, how do you see the convergence of telecom, cloud and enterprise IT shaping the next wave of telco innovation, and where does Wind River play in this evolution?
Paul Miller, Wind River (02:56):
Yeah, it's a pretty interesting question, and it speaks to the evolution over time of what's happened with cloud technology in the service provider industry. As everybody knows, if you go back many years, we had the transition from proprietary appliances to NFV, network functions virtualization, and that was primarily adopted in the core of carrier networks years ago, before virtual RAN and Open RAN started launching. And given the time that had gone by, that started out of the gate as a cloud-native solution, as we've talked about, so we have a lot of deployments there. The thing that's starting to change, as we talk to operators globally, is that they don't like the idea of managing multiple types of cloud. Obviously, for each type of cloud they have to train their personnel on how to operationalize, deploy and manage these systems. And whether it is at the far edge for RAN, the near edge, the core, enterprise IT, OSS/BSS or the back office, they prefer all of these virtualization systems to converge into a single technology. So a technology that can best do that, that can operate efficiently at the edge at highly distributed scale but still support the enterprise-class workloads in the core of the network, is really attractive to them as they look at their next generation. I think we'll see a single cloud technology converging core to edge.
Guy Daniels, TelecomTV (04:11):
Well, let's build on that then, Paul, and talk about differentiation and what Wind River brings to telcos, because the company has built its reputation by starting from the edge and moving towards the core. Why is that pathway different and more effective than starting from the core and then trying to extend into the far edge?
Paul Miller, Wind River (04:31):
Yes, that's true. Certainly, in the scope of vendors providing cloud technology, Wind River was quite late to market; we've been in the market for about seven to 10 years now, and we started our high-scale deployments in virtual RAN, which transitioned naturally to Open RAN at very, very high volume. The interesting challenge when you look at a virtual RAN deployment is the hardware acceleration, the low latency, the high performance, the low footprint. There are some really unique requirements there. Supporting those hardware accelerators, providing the ability to manage those things, providing a low-latency kernel, the ability to sit on that platform and run with a minimal number of cores, like a single core of overhead for the entire virtualization solution: these are things that drove us to an extremely efficient, cost-effective solution. Now, as we move back towards the core and look at deploying there, and we've got several wins in that space, the workloads are actually much easier.
(05:27):
You go from a highly distributed architecture to one that's more centralized, you have fewer specialized hardware accelerators, and you're not as sensitive to latency. So we're finding that what we built for the far edge is much easier to move to the core. Our competitors, however, built for the core first in enterprise IT, and as they now try to move to the edge and run low-latency or specialized hardware-accelerated workloads, they're having a really hard time with that, because it requires a pretty significant knowledge base in real-time systems to do it efficiently. So that's why we're finding our movement from edge to core works pretty well.
Guy Daniels, TelecomTV (06:05):
And as you've said, Wind River transitioned to Open RAN, and you've achieved some significant wins with these deployments. What lessons from these projects best illustrate your strengths in real-time performance, low footprint, high efficiency and, of course, automation?
Paul Miller, Wind River (06:24):
Yeah, I'd say that over time we started with the CaaS layer, the core cloud-native infrastructure, and hardening that. It took quite a few years to get to the point where you could deploy a geo-distributed vRAN or Open RAN network across the tens of thousands of nodes that you need for a service provider, all geo-distributed, all low latency, and that's where our early investments were. As that product became much more stable in service provider networks, though, our more recent years have been spent on operational challenges, right? The service providers really want this cloud-native approach to be equal or superior, ideally superior, to the legacy approaches they used to use. And that means the ability to upgrade in the field: how many sites can you touch within a maintenance window, and is that equal to or better than the traditional RAN approach, or the traditional telco core approaches, where these systems were more vertically integrated from a single provider?
(07:20):
Vertical integration obviously means things move faster, because one vendor controls the entire upgrade process. That's more challenging when you have a distributed architecture with multiple vendors in play. So over the past three to four years we've been building a lot of operational tools: orchestration, automation, analytics, these types of things, and fully integrating them into the stack as we've learned through live deployments with our customers, to make sure we have all the tooling so that in day-two operations we're as effective as we are at the initial technology selection.
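For reference, here is a minimal back-of-the-envelope sketch of the maintenance-window arithmetic described above; the figures and helper functions are illustrative assumptions, not Wind River tooling or customer data.

# Hypothetical planner for "how many sites can you touch within a
# maintenance window" during a rolling cloud upgrade.
# All figures below are illustrative assumptions, not vendor data.

def sites_per_window(window_minutes: float,
                     minutes_per_site: float,
                     parallel_sites: int) -> int:
    """How many sites fit in one maintenance window."""
    sequential_rounds = int(window_minutes // minutes_per_site)
    return sequential_rounds * parallel_sites

def windows_needed(total_sites: int, per_window: int) -> int:
    """How many maintenance windows a full network upgrade takes."""
    return -(-total_sites // per_window)  # ceiling division

if __name__ == "__main__":
    per_window = sites_per_window(window_minutes=240,   # four-hour window
                                  minutes_per_site=30,  # upgrade plus health check
                                  parallel_sites=500)   # sites orchestrated in parallel
    print(per_window, "sites per window")
    print(windows_needed(total_sites=20_000, per_window=per_window),
          "windows for the full network")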
Guy Daniels, TelecomTV (07:53):
Well, can we talk more about analytics now? I'd like to ask you how your platform stands apart when it comes to integrating analytics and observability, orchestration and automation into telco-grade Kubernetes stacks.
Paul Miller, Wind River (08:08):
Yeah, sure. So it's kind of interesting, the different vendors' approaches to this. We don't take a kit-of-parts approach with our technology stack: we integrate Linux with the cloud technology. It's not that you deploy Linux, then deploy the virtualization platform on top of that, and then deploy applications; it's a fully integrated stack. So when our customers consume this technology, they know it's continuously integrated with the base technology that they're deploying, in this case Linux and Kubernetes. And, as you mentioned, the analytics, orchestration and automation are also fully integrated and continuously regression tested together as a full suite of products. This means that the complete stack is fully functional and hardened as it's delivered to the customer. These things aren't separate products or separate business priorities for our business, Linux versus virtualization versus analytics and orchestration; they're a unified offering. And even in our open-source initiative, StarlingX, which is hosted in the same community where OpenStack was built, the Open Infrastructure Foundation, all of Linux, all of Kubernetes, all of OpenStack, the entire thing is one vertically integrated solution. That has been really attractive for the service provider market: they're getting the best of open source, but in a more bespoke stack that's fully integrated, which is more what a telco service provider expects.
Guy Daniels, TelecomTV (09:30):
Yeah, absolutely. Thank you, Paul. Well, I think we should finish by discussing AI and new opportunities, because we know AI is rapidly becoming integral to telco transformation. So should operators integrate their AI initiatives, network AI and AI-RAN efforts, with their cloud-native journey? Or are AI and cloud native two separate tracks?
Paul Miller, Wind River (09:54):
I think they're intertwined, right? Obviously, pretty much any AI initiative is going to be container-based, as current technology is built on that type of virtualization platform. They're not critically dependent on each other, though. What I would say is that if you look at the AI-RAN Alliance and AI-RAN initiatives, the idea of doing things like dynamic beamforming and continuous power management, and being agile with the infrastructure that you have, that's what AI is really about from a far-edge perspective. We see that emerging, and it will follow a natural technology adoption curve. None of it is deployed today, but as you can see from the activity of the major RAN providers in this space, I fully expect it to be adopted and deployed. And our infrastructure, of course, supports that.
(10:40):
The thing that's a bit more exciting for us is that we've seen the impact of AI on operations and operations automation. We've even done demonstrations on TelecomTV where we take an agentic implementation with a large language model and show how you can automate API control of all these deployed systems. That ability to deploy AI not only at the far edge, for AI-RAN or for hosting AI applications, but also to use it operationally to significantly reduce the opex of running these networks and improve the ability to debug problems, that's got us really excited, and we see it being adopted before the AI-RAN type applications.
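As a rough illustration of the agentic pattern described here, the Python sketch below shows a loop in which a model-chosen action is executed against a management API; the endpoint, tool names and the choose_action() stub are hypothetical placeholders, not the actual demonstration referenced above.

# Minimal sketch of an agentic operations loop: a language model chooses
# which management-API call to make, the agent executes it, and the result
# feeds the next decision. Endpoint, tools and choose_action() are hypothetical;
# a real deployment would call an actual LLM and a real cloud-management API.
import json
import requests

MGMT_API = "https://cloud-manager.example.net/api/v1"  # hypothetical endpoint

TOOLS = {
    "list_degraded_nodes":  ("GET",  "/nodes?status=degraded"),
    "collect_node_logs":    ("GET",  "/nodes/{node}/logs"),
    "lock_and_reboot_node": ("POST", "/nodes/{node}/actions/lock-reboot"),
}

def choose_action(observation: str) -> dict:
    """Stand-in for the LLM call: map the latest observation to the next tool."""
    if "degraded" not in observation:
        return {"tool": "list_degraded_nodes", "args": {}}
    node = observation.split()[0]
    return {"tool": "collect_node_logs", "args": {"node": node}}

def call_tool(tool: str, args: dict) -> str:
    method, path = TOOLS[tool]
    url = MGMT_API + path.format(**args)
    try:
        resp = requests.request(method, url, timeout=10)
        return resp.text
    except requests.RequestException as exc:
        # The endpoint above is a placeholder, so a dry run lands here.
        return f"unreachable: {exc}"

def run_agent(max_steps: int = 3) -> None:
    observation = "start"
    for _ in range(max_steps):
        action = choose_action(observation)
        print("agent step:", json.dumps(action))
        observation = call_tool(action["tool"], action["args"])

if __name__ == "__main__":
    run_agent()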
Guy Daniels, TelecomTV (11:18):
That's interesting. Now, we also know that edge AI has a lot of promise. So what does edge AI mean for telcos in practice, and how do innovations such as GPU-as-a-service or agentic AI for API automation change the operator's ability to run and, perhaps more importantly, monetize their networks?
Paul Miller, Wind River (11:39):
Yeah, well, certainly the technology stack changes. What we had before was really an Intel, AMD or Arm-based compute platform that we ran Linux and a virtualization technology on. Now, with AI, pretty much any AI or generative AI application requires GPUs or AI accelerators to be present on the system and to be integrated, so we have to be able to support those types of specialized silicon platforms, integrated with the cloud technology, at high scale. GPU-as-a-service enables the service provider to host that infrastructure and run multiple applications on the same GPU, with things like NUMA core pinning or GPU time slicing; these sorts of capabilities become features in the platform that really enable the service provider to monetize it from an AI perspective. The other thing that happens is in the layer immediately above the virtualization layer: you start needing to support new open-source components, things like TensorFlow and TinyML. These types of application enablers are part of the software suite that needs to be deployed on the platform, in addition to the GPU accelerator support, to really support the applications that the service providers are bringing to market.
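For context, GPU time slicing in Kubernetes is commonly exposed by advertising a physical GPU as several schedulable replicas, so a workload simply requests one unit of the GPU resource. The sketch below does that with the Kubernetes Python client; the image, namespace and resource name are illustrative assumptions that depend on how the GPU vendor's device plugin is configured.

# Sketch: requesting a time-sliced GPU share for an inference pod via the
# Kubernetes Python API. Assumes the cluster's GPU device plugin has been
# configured to advertise time-sliced replicas of each physical GPU; the
# image, namespace and resource name below are illustrative assumptions.
from kubernetes import client, config

def launch_inference_pod() -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster

    container = client.V1Container(
        name="edge-inference",
        image="registry.example.net/edge/tinyml-serving:latest",  # hypothetical image
        resources=client.V1ResourceRequirements(
            # One scheduling unit; with time slicing enabled this maps to a
            # share of a physical GPU rather than a whole device.
            limits={"nvidia.com/gpu": "1"},
        ),
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="edge-inference", namespace="ai-apps"),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="ai-apps", body=pod)

if __name__ == "__main__":
    launch_inference_pod()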
Guy Daniels, TelecomTV (12:57):
And we're also hearing more now about sovereign cloud requirements in telecom. How do you believe the industry can help operators meet these digital sovereignty needs while still benefiting from open, cloud-native platforms?
Paul Miller, Wind River (13:13):
Yeah, I think we're well positioned for that with a lot of the technology we've built. When we use the term cloud, a lot of people not skilled in the art immediately associate it with public cloud, right? Things like AWS and so on. That's not what we're talking about here. As we talk about cloud native on this program, we're really talking about private cloud instances that are deployed in controlled environments the operator owns; they deploy and maintain that infrastructure as a private cloud. That's very much what a sovereign cloud is, right? It's an air-gapped environment that allows a particular entity concerned about security or data isolation, this sort of thing, to run a cloud infrastructure to host its applications for its own purposes, but one that isn't shipping data off to a public cloud infrastructure. So the type of technology we're talking about on the show, and that all the vendors provide, is really built for that. Now, certain attributes of that technology, such as the ability to support multiple sites within a single pane of glass, to ensure data isolation and traceability, and GDPR compliance in the UK, for example, are additive capabilities that really make a technology platform ideally suited to sovereign cloud. And as a business, we're very focused on those attributes.
Guy Daniels, TelecomTV (14:25):
Thanks, Paul. And finally, I'd just like to cover an area that's generating a lot of interest at the moment, and that's the automotive sector and providing services for that vertical. How can advanced services such as OTA updates, cellular V2X or compute offload open up new revenue streams for operators?
Paul Miller, Wind River (14:46):
Yeah, it's a pretty interesting area, and it's really an example of a large customer problem statement that spans the entire globe. As we deploy 5G, we find that most of our customers are quite unhappy with the monetization of 5G. Network slicing hasn't really emerged as a 5G SA capability globally, and there isn't monetization beyond the typical consumer device. Alright, so we've got higher bandwidth and lower latency, but where's the revenue I was supposed to be able to build with 5G? That's very commonly stated to us. One of the interesting things about our business is that we're in many billions of devices; Wind River's technology is used in aerospace and defense, medical, industrial and automotive, as you mentioned. So the question is: how do we take those devices that are now being connected to the service provider network and provide paths of great value that the service provider can enable, allowing them to further monetize their network?
(15:43):
The examples you provided, OTA and C-V2X, or cellular vehicle-to-everything, are cases where network slicing and the ability of the service provider to create connectivity value between these devices represent new revenue stream potential, right? So cellular vehicle-to-everything enables vehicles to perform accident avoidance not just from their own internal compute algorithms, but by sharing, for example, geodetic location and velocity between vehicles, so that the ADAS self-driving algorithms can be modified to improve user safety in the vehicle. So there's a lot of exciting potential there for comfort, convenience and safety, and this is something the service provider is intricately linked with, which provides a path to new revenue generation. We see this also with private 5G and industrial and manufacturing in many areas, and we'd certainly encourage anyone to come and talk to us; we have a lot of expertise in those edge systems.
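To make the vehicle-to-vehicle idea concrete, here is a toy Python calculation of time to closest approach from position and velocity shared between two vehicles; the data structure and numbers are purely illustrative and are not a standards-defined C-V2X message format.

# Toy example: two vehicles share position and velocity over a V2X link and
# each computes a time-to-closest-approach estimate for its driver-assistance
# logic. Fields and values are illustrative only (real C-V2X uses defined
# message sets such as Basic Safety Messages).
from dataclasses import dataclass
import math

@dataclass
class VehicleState:
    x: float   # metres east
    y: float   # metres north
    vx: float  # m/s
    vy: float  # m/s

def time_to_closest_approach(a: VehicleState, b: VehicleState) -> float:
    """Seconds until the two vehicles are closest, assuming constant velocity."""
    rx, ry = b.x - a.x, b.y - a.y        # relative position
    vx, vy = b.vx - a.vx, b.vy - a.vy    # relative velocity
    speed_sq = vx * vx + vy * vy
    if speed_sq == 0.0:
        return math.inf                  # no relative motion
    t = -(rx * vx + ry * vy) / speed_sq
    return max(t, 0.0)

if __name__ == "__main__":
    ego = VehicleState(x=0.0, y=0.0, vx=20.0, vy=0.0)       # 72 km/h eastbound
    other = VehicleState(x=120.0, y=0.0, vx=-15.0, vy=0.0)  # oncoming vehicle
    print(f"closest approach in {time_to_closest_approach(ego, other):.1f} s")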
Guy Daniels, TelecomTV (16:39):
Great. Hopefully there are a lot of opportunities there for telcos. We must leave it there though, Paul. As always, it's great talking with you, and thanks so much for sharing your views with us today.
Paul Miller, Wind River (16:47):
Thanks Guy. Great to be here.
Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.
Paul Miller, CTO, Wind River
If cloud native is to play its part in shaping the future of telecoms, it needs to avoid being just another layer of added complexity, warns Wind River’s Paul Miller. He discusses how best to approach this transformation, reveals what’s working and what isn’t, and shares some blueprints for success.
Recorded September 2025