Clarence Reynolds, TelecomTV (00:13):
What are the key challenges and opportunities for telecom companies as they navigate the complex landscape of cloud native technologies? Greg Dalle, director of product management for service providers at F5, is here to share his insights on the cloud native telco ecosystem and its hurdles. Greg, thank you for being with us today. I want to begin by asking how you would describe F5's role in the cloud native telecom ecosystem.
Greg Dalle, F5 (00:41):
So first of all, F5 in the telco environment: people know F5, we have about 20,000 customers, but people don't really know how much we do with service providers. We have hundreds of service provider customers, and we do application delivery and security with all parts of service providers. And we say telco, but the reality is that also includes cable companies, or MSOs, as we call them in North America. And so we work with their IT departments, with their enterprise services, and also with the networking components of telcos. And those networks are obviously so critical to offering services to consumers, enterprises, and other telcos. The second part to your question is cloud native. So when we play in cloud native, we play at multiple layers: we are part of the platform, we're part of the applications, and we even run our own cloud services.
(01:45):
So call it drink your own champagne, if you want. So let me give a little bit more color on these three things. The first one is we're part of the platform. So obviously, to run cloud native, you need a platform where you're going to run the cloud native functions. So we are part of the platform: we do all the communication within clusters and with the outside of the clusters, and the security that goes with it. So service meshes, ingress, egress, and in that case we work with the platform departments at the service providers. Then we have cloud native applications, or CNFs, out there, 5G cores in particular: we do security, Gi-LAN, N6 LAN, DNS, carrier-grade NAT, firewalling. And so we actually see the cloud native challenges from the other side, not from the platform but from the applications themselves. And then the third one I mentioned is running our own cloud services with F5 Distributed Cloud. So that means we have our own network functions, we offer them as a service, and we operate the cloud service ourselves. So we basically have, for example, SREs in F5, since we're talking about skills for cloud native. And so we understand all the challenges of running and operating a cloud service. So those are the different areas, but if you look at where F5 plays with service providers and where we are in different parts of the stack, this gives F5 a really unique position in this new cloud native stack.
Clarence Reynolds, TelecomTV (03:28):
So Greg, we understand that a skills shortage is a major hurdle for telecom companies in seizing cloud native opportunities. What are some of the other significant challenges you see ahead?
Greg Dalle, F5 (03:40):
Yeah, so I think TelecomTV ran a survey last year at the Cloud Native Summit, and obviously skills was the number one challenge perceived by participants, but very closely behind you had basically how you design, implement and operate those cloud native environments. And so this is really something that we're experiencing and helping our customers, our service provider customers, address: how you put together those cloud native environments. And I'll zoom in on something more specific. So obviously one of the key components of cloud native is Kubernetes, the container orchestration solution that powers it; it's pretty much the de facto standard for cloud native. And part of it is Kubernetes networking: how do I connect those containers, pods, services? And if you think about it, Kubernetes was designed almost 10 years ago by developers, not by networking experts. And so the networking you get from Kubernetes is very basic, but if you think of service providers, they obviously have very complex use cases where they need to communicate with the radio access network and the 5G core and the various services, et cetera.
(05:10):
And so this complexity, as well as the complexity of the network integration that goes with it, integrating with different VPNs, different tenants, different weird protocols like Diameter and SIGTRAN and SIP, et cetera, all this complexity doesn't fit very nicely with Kubernetes. And so we actually call that the Kubernetes ball of fire. And that means this complexity leaks outside of the clusters when you try to interconnect those complex applications with complex networking environments. And so part of what we do on the product side, what my team of product managers does, is define a solution that abstracts this complexity and addresses this Kubernetes ball of fire, so that we have one integration point for all this networking, security and visibility as part of Kubernetes. So we didn't want to work around the weaknesses of Kubernetes, we just wanted to extend it, so that we can still take advantage of Kubernetes but address this complexity, this Kubernetes ball of fire.
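For readers unfamiliar with the gap Dalle describes, the sketch below is illustrative only, not F5's product API: a stock Kubernetes Service exposes plain L4 TCP/UDP load balancing and knows nothing about telco protocols or external network topology, while the "extend Kubernetes" approach typically uses Custom Resource Definitions so that protocol-aware intent can be declared at a single integration point. All resource names, the `ProtocolEgress` kind, and the `example.net` API group here are hypothetical.

```yaml
# What stock Kubernetes gives you: a Service is little more than
# L4 load balancing for TCP/UDP across a set of pods.
apiVersion: v1
kind: Service
metadata:
  name: smf-signaling
spec:
  selector:
    app: smf
  ports:
    - name: pfcp
      protocol: UDP
      port: 8805
---
# Hypothetical custom resource, in the style of vendor extensions:
# it declares protocol-aware egress into a specific VPN/tenant, the
# kind of intent a plain Service cannot express. Not a real F5 API.
apiVersion: example.net/v1
kind: ProtocolEgress
metadata:
  name: diameter-to-hss
spec:
  protocol: diameter        # protocol-aware, not plain L4
  targetNetwork: vrf-core   # which external VPN/VRF to egress into
  podSelector:
    app: pcf
```

The design point Dalle makes is that the second style keeps Kubernetes as the single declarative control point instead of pushing per-protocol, per-VPN plumbing outside the cluster.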
Clarence Reynolds, TelecomTV (06:23):
And I really want to focus on that, the Kubernetes ball of fire. Is this issue specific to telecom applications, or does it represent broader challenges with Kubernetes adoption?
Greg Dalle, F5 (06:35):
Well, that's a great question, and obviously one of the key themes for service providers today is AI, right? Either for their own internal use, to serve their customers and supplement their services, or to offer AI or AI factories as a service, right? Or to have local LLMs in their own language, respecting the sovereignty rules specific to their country, et cetera, et cetera. And so if you think about it, AI applications are all cloud native, so they're all going to have to integrate with Kubernetes, like 5G applications. They are complex applications with a multitude of pods and complex data management, like 5G applications. They have to integrate into sophisticated environments. Those AI factories will have different tenants, different operational groups in the service provider, different networks, and also they will need very, very strong performance. Like 5G CNFs, they have to run at very high throughput with a very, very large number of sessions and subscribers.
(07:57):
The same thing happens with AI, even at a higher scale. And so if you look at all the challenges that we've experienced with running 5G in a cloud native way, integrating those complex AI applications with the platform and with Kubernetes brings exactly the same challenges. So I like to say that 5G applications are more like race cars; they're not your everyday car. The same goes for AI applications: they're also race cars, very demanding applications that need some unique solutions to address their needs. And obviously AI also has different, additional requirements that we don't see in 5G. In particular, the AI infrastructure, the hardware that powers those factories, is unique, right? If you think of the GPUs and other hardware components that are there, it's very critical that whatever software solutions we build to address cloud native can be optimized and perform at maximum speed by leveraging the hardware that's specific to AI. So that's not something we discussed at the summit this year, but I expect at the next summit it's going to be all over the place.
Clarence Reynolds, TelecomTV (09:23):
Challenges and opportunities ahead indeed. Greg, thank you very much for your insights today.
Greg Dalle, F5 (09:29):
Thank you.
Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.
Greg Dalle, Director of Product Management for Service Providers, F5
Greg Dalle delves into F5’s role in the cloud-native telecommunications ecosystem. He addresses the significant challenges telecom companies face on their journey to cloud-native adoption. Dalle also discusses the concept of the ‘Kubernetes ball of fire’, exploring whether this issue is specific to telecom applications or represents broader challenges with Kubernetes adoption across industries.
Recorded September 2024