Guy Daniels, TelecomTV:
Hello, you are watching TelecomTV. I'm Guy Daniels. For service providers undertaking a cloud-native transformation, what are the critical success factors to ensure board-level commitment and a successful implementation of their technology and process strategies? Well, joining me now to discuss this is Sean Cohen, who is director of product management, hybrid platforms, at Red Hat. Hello, Sean. Very good to see you again. Now, Red Hat has considerable and proven expertise in open source, hybrid cloud and cloud-native technologies. So as such, which business drivers do you recommend to service providers who are seeking this all-important board-level commitment to their cloud-native journey?
Sean Cohen, Red Hat (00:57):
That's a very good question, Guy, and I'm glad to be back. I think that when we look at the drivers, I want to actually start with the business needs, right? The business needs are speed to market and service agility, right? And cloud native unlocks, I would say, both, with the ability to deliver services faster, the ability to introduce new services faster and so forth. So that's the key benefit of moving to cloud-native deployments. But the drivers are typically more around what I call the making-money or saving-money front: the first one is driving cost optimization, and the second one is driving new revenue streams. It's the two sides of the coin. So the goal is the ability to actually monetize and deliver services faster with more agility, but at the same time you have to make trade-offs, because you are required to make these cost optimizations and you are required to drive new revenue streams.
(02:06):
So there's a balance that needs to happen between those two areas. And if you want, I can give you a few examples. One example we can look at is obviously the tight budgets and funding levels: across most of the businesses, these remain flat or barely growing as we embark on the next year. So you are asked to actually improve your spend, while obviously we still maintain all these legacy technologies. We have 2G and 3G still powered on, and at the same time you need to allocate funds towards the new cloud-native as well as the AI-native investments. And sometimes they go together: for me to drive forward and deliver the new, we talked about the service agility part, AI-driven services, I have to reduce cost and optimize my existing spend. So these are the two main drivers we are seeing when it comes to those investments.
(03:13):
And obviously, specifically, you mentioned Red Hat being a leader in open source. We are working with customers to give them choice, but also driving and partnering with the ecosystem through open source initiatives, such as Open RAN, as one good example. Another one is more on the AI opportunity front: Red Hat is involved in the CAMARA APIs initiative under the Linux Foundation to drive new ways to simplify network complexity, but also to find new ways to introduce and sell new services, or even faster services, to healthcare and other verticals by exposing these new APIs. So the new API exposure that we are working on with the community and the ecosystem is there to drive that service agility, and this goes back to what I said in the beginning about the need to drive new revenue streams. So some of the investments we're seeing from Red Hat's side with the ecosystem are towards unlocking these open frameworks, whether it's Open RAN, which also obviously talks about using more standardized APIs, or the CAMARA APIs. These are two good examples of what we're doing specifically in open source.
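For context, the CAMARA network APIs referenced above are exposed as plain REST APIs. A minimal sketch of a consumer-side call to a CAMARA-style Quality on Demand endpoint might look like the following; the base URL, access token handling and payload fields are illustrative assumptions rather than details taken from the discussion or any specific operator deployment.

```python
import requests

# Illustrative only: base URL, token and payload fields are hypothetical,
# not taken from the interview or a particular operator's CAMARA deployment.
API_BASE = "https://api.example-operator.com/qod/v0"
TOKEN = "REPLACE_WITH_OAUTH2_ACCESS_TOKEN"

def create_qos_session(device_ip: str, profile: str = "QOS_E") -> dict:
    """Request a temporary quality-on-demand session for a device."""
    payload = {
        "device": {"ipv4Address": {"publicAddress": device_ip}},
        "qosProfile": profile,   # e.g. a low-latency profile name
        "duration": 600,         # seconds the boosted QoS should last
    }
    resp = requests.post(
        f"{API_BASE}/sessions",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()           # would contain the session id and status

if __name__ == "__main__":
    session = create_qos_session("203.0.113.10")
    print("QoD session created:", session.get("sessionId"))
```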
Guy Daniels, TelecomTV (04:41):
We spoke there of trade-offs, and you also mentioned ongoing historic investments. So how do you structure a phased migration blueprint that reconciles legacy network functions with cloud-native adoption?
Sean Cohen, Red Hat (04:57):
Yeah, so I think Red Hat's approach is actually towards meeting customers where they are, but at the same time allowing them to modernize, right? The goal is modernization, but for us to get to that modernization point, we need to improve a lot of aspects of the way we work. And some of it is actually adopting cloud native where applicable and unifying the infrastructure. I'll give you some examples. If we bring an image up, this is a great example of what we've done recently. We have Red Hat OpenStack; OpenStack, as you know, is our cloud infrastructure offering, mainly for 4G LTE networks, which we have many, many service providers running in production. But at the same time, even those customers who run 4G LTE need to introduce 5G services. Sometimes these services are actually on the same platform or depend on each other, so you may have a 5G service that actually relies on the 4G resources.
(05:59):
So for that, we actually introduced the concept of Red Hat OpenStack Services on OpenShift, which, if you think about it, is OpenStack running on top of Kubernetes. This actually allows us to unify the platform. So if you look at the diagram, we can now have a platform that allows us to run both the vEPC and VNFs, but also 5G RAN CNFs, on the same platform. And this is thanks to that unified infrastructure: the form factor of the deployment is an OpenShift cluster that hosts the control plane for OpenStack, as we can see on the right side, and that allows us to run cloud-native functions as well as the virtualized functions running on top of OpenStack with OpenStack APIs. But the key point is that it's a unified platform. So by introducing this capability, we're now able to make a gradual move: hey, I'm starting from 4G, and I now need to deploy 5G services.
(07:02):
5G is already cloud native, Kubernetes-based, in our case OpenShift-based. Now I want to get all the benefits of these capabilities. So for us, what is driving this is having that unification, and some of the benefits for the service providers are great because we're now talking about even uniform hardware. As you modernize from virtual machines to CNFs, from PNFs to CNFs, you can sometimes even take your existing hardware with you on that journey, so you optimize the cost, right? And that unified platform actually allows you to do it, because we natively allow customers, within the OpenShift console, to move hardware between OpenStack and OpenShift. You also get unified observability across the platforms and, more importantly, the skill sets are unified, right? What we've seen is that many service providers used to have a dedicated team to power 4G LTE and a different team to power the 5G; now it's actually unified, because they need to do both. We're probably going to talk later about skills and how we close the skills gap, but this is how we do it from a platform perspective, because it's the same tooling, the same way to deploy OpenStack as well as OpenShift in a cloud-native fashion, and you take a gradual path for that 4G to 5G, cloud-native adoption.
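For context, the "one unified platform" described above means that both the OpenStack control plane and the cloud-native network functions live on the same OpenShift cluster and can be inspected with the same tooling. A minimal sketch using the Kubernetes Python client is shown below; the namespace names ("openstack" for the OpenStack control plane, "5g-core" for the CNFs) are illustrative assumptions, not the actual layout of a Red Hat deployment.

```python
from kubernetes import client, config

# Assumed namespaces, purely for illustration: the OpenStack control-plane
# pods and the 5G core CNFs are hypothetically deployed side by side.
NAMESPACES = ["openstack", "5g-core"]

def summarize_unified_cluster() -> None:
    """List workloads from both namespaces on the same OpenShift/Kubernetes cluster."""
    config.load_kube_config()   # or config.load_incluster_config() when run inside the cluster
    core_v1 = client.CoreV1Api()
    for ns in NAMESPACES:
        pods = core_v1.list_namespaced_pod(namespace=ns)
        print(f"{ns}: {len(pods.items)} pods")
        for pod in pods.items:
            print(f"  {pod.metadata.name:50s} {pod.status.phase}")

if __name__ == "__main__":
    summarize_unified_cluster()
```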
Guy Daniels, TelecomTV (08:31):
Yeah, this is really interesting, Sean. What reference architectures are most used by telcos for their core, edge and RAN workloads?
Sean Cohen, Red Hat (08:40):
Thank you, Guy. So yeah, I think the main ones we actually see are around the telco RAN and the telco core and edge. In fact, we have several of those. So if you look at the image, we have what we call the integrated and validated blueprints that we offer as reference architectures; you can go to our Red Hat architecture portal, which allows you to download those, right? And this is focused on providing modular blueprints so telcos can actually address specific needs. You can just look at, okay, I want to improve the network and start by just moving the network to cloud native, as one example, or I want to actually introduce a 5G core, and what will the step-by-step be to do it? So what we see on the right side is actually how to build a 5G core architecture, which is another link you can download, a very long document on where we want to go.
(09:46):
At the same time, we're actually providing you the architectural blueprint that you can use to drive that. And this will describe in more detail the configuration of the clusters, and how to design and host the 5G workloads in the RAN. Obviously it also captures our recommendations from all the tests we've done, configurations that we know work and are validated, so you can deliver that reliable, repeatable performance as you deploy these reference architectures. A lot of it is worked on by our Red Hat teams, and we have a very large ecosystem engineering team that helps sponsor that, but there's a lot of work we also do with the ecosystem partners. So we have a lot of specific reference architectures, even with dedicated providers, to help you actually start with the right choice. And obviously every service provider has already made some choices when it comes to equipment.
(10:52):
This actually takes care of that, because we have all this choice. So yeah, I'd encourage everyone who's new to this journey to basically start with this portal and get a sense of all the things that are available in the Red Hat portfolio architecture center. And we also have examples, as I pointed out, like whole portfolio examples you can download. Think of it as a shortcut to start using a validated architecture that we know works in a specific use case, such as core, edge and RAN, and then here's how I start off. So that's basically how we help commercially, and it's a very collaborative model, working with the ecosystem as well to drive those.
Guy Daniels, TelecomTV (11:42):
Excellent. Thanks very much, Sean. Well, let's move on to security. A really critical issue for telcos. How is security by design and zero trust being embedded into your cloud native operations?
Sean Cohen, Red Hat (11:55):
Yeah, so I think I've talked about it before: security is a layered approach. Obviously, when we look at zero trust being embedded into cloud native, it's at every layer, all the way from the hardware to the application layer, and now AI and AI agents. But I think with cloud, as we introduce these new services, I mentioned the service agility to introduce new services, we also see a new set of workloads running on the platform, such as AI and agentic AI. And so the classic zero trust that we had before, which was enforced at the hardware, virtual machine or container level, is now being not replaced but augmented, with the same mindset of never trust and always verify, at the application level. So I'll give you some examples when we talk about zero trust and specifically how to enable zero trust in cloud-native operations.
(13:05):
So the first thing we're doing in OpenShift is we actually introduce a specific operator for zero trust workload identity, and this actually tackles the other side of the coin, which I brought up. It's not at the infrastructure level or the application level; it's actually machine-to-machine authentication, if you will, where we have baked in and integrated the SPIFFE-inspired open source frameworks to provide that unified identity assurance. So you can have that access control regardless of the environment or location. Especially in service providers, we just talked about core all the way down to the edge, the footprints are very vast in terms of deployment, but the models are now going to be very similar because of these services: I can have an AI workload very close to the endpoint, the edge point, and I can run almost the same AI service more centrally in the core, but the authentication and the zero trust need to be addressed by both. And I think this is where our responsibility as an open source leader is to address that by integrating those, and this goes for both, as I pointed out, AI and virtualization use cases and so forth. So the goal of introducing that identity manager for zero trust is actually to reduce the risk of cyberattacks and enhance multifactor authentication.
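For context, the machine-to-machine "never trust, always verify" model described above hinges on every workload presenting a verifiable identity, such as a SPIFFE ID. The sketch below is a heavily simplified stand-in for what SPIFFE/SPIRE and the OpenShift operator automate: it only shows the identity-based authorization step, and the trust domain and workload paths are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical policy: only workloads from this trust domain, on these paths,
# may call the protected service. All values are illustrative assumptions.
ALLOWED_TRUST_DOMAIN = "prod.example-telco.org"
ALLOWED_WORKLOAD_PATHS = {"/ran/cu-cp", "/core/amf"}

def is_peer_authorized(spiffe_id: str) -> bool:
    """Check a SPIFFE-style ID (spiffe://<trust-domain>/<path>) against local policy.

    In a real deployment the ID comes from the peer's X.509 SVID presented over
    mTLS, and the certificate chain is verified first; this sketch covers only
    the final identity check.
    """
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe":
        return False                         # reject anything that is not a SPIFFE ID
    if parsed.netloc != ALLOWED_TRUST_DOMAIN:
        return False                         # wrong trust domain: deny regardless of network location
    return parsed.path in ALLOWED_WORKLOAD_PATHS

if __name__ == "__main__":
    print(is_peer_authorized("spiffe://prod.example-telco.org/core/amf"))  # True
    print(is_peer_authorized("spiffe://lab.example-telco.org/core/amf"))   # False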
(14:48):
We also talked about moving from long-lived tokens to short-lived tokens. This is all being done very fast, at the machine-to-machine level, so you can have that multifactor in place, but at the same time it allows you to reduce automation and maintenance costs, because you're moving to a more cloud-native way with zero trust, but without sharing your secrets and private keys, right? This is key, because workloads can span multiple identity domains and you need to be able to have that centralized trust and access, and obviously it doesn't end there. You can even integrate with third parties like HashiCorp when it comes to secure secret management and so forth. The last thing I want to point out, we talked about the layers of zero trust: at the core of our OpenShift offering, if you look at specific areas, not just the core security features but even the networking level, we fully integrate our networking components across the board to allow that zero trust security all the way down, and it manifests in the microsegmentation we're doing, or the network observability part, where we now have the ability to use even frameworks like anomaly detection to make sure you're meeting compliance.
(16:18):
It can be the advanced integrations we have across the rest of the OpenShift portfolio, such as ACS and ACM, which are Advanced Cluster Security and Advanced Cluster Management. So zero trust is carried across not just the different layers, as I pointed out, but also the offerings we have, all the way from the hardware and the way we do workload attestation, up to AI and how you deal with agentic AI today in 5G deployments. So it's pretty much layered, but again, the good news is we're working on it at more than one level, and obviously it's all about securing the workloads, improving threat detection and preventing a lot of this risk ahead of time by adopting automation and all these practices.
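For context, the microsegmentation mentioned above ultimately shows up as network policy on the cluster. Below is a minimal sketch, using the Kubernetes Python client, of a default-deny ingress policy for a hypothetical 5G core namespace that only admits traffic from pods labelled as part of that core; the namespace and labels are illustrative assumptions rather than Red Hat's validated configuration.

```python
from kubernetes import client, config

def apply_core_microsegmentation(namespace: str = "5g-core") -> None:
    """Create a NetworkPolicy that denies all ingress except from approved core pods.

    The namespace and label selector below are hypothetical; a real deployment
    would follow the segmentation plan in the validated reference architecture.
    """
    config.load_kube_config()
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="core-default-deny-with-allow"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),   # empty selector: applies to every pod in the namespace
            policy_types=["Ingress"],
            ingress=[
                client.V1NetworkPolicyIngressRule(
                    _from=[
                        client.V1NetworkPolicyPeer(
                            pod_selector=client.V1LabelSelector(
                                match_labels={"app.kubernetes.io/part-of": "5g-core"}
                            )
                        )
                    ]
                )
            ],
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(namespace, policy)

if __name__ == "__main__":
    apply_core_microsegmentation()
```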
Guy Daniels, TelecomTV (17:18):
Well, a final question for you, Sean, and this is another important issue for telcos. How do you assess and bridge the cloud native skills requirements to maintain this new service velocity and agility we spoke of to ensure continuous innovation?
Sean Cohen, Red Hat (17:35):
So that's a great one, Guy, and again, we talk about it being a journey. Adopting cloud native is a journey. The starting point in a lot of cases is the existing teams. It's not like, hey, tomorrow I'm going to hire a whole army of new people who are certified in cloud native and DevOps and SRE. By skill set, you need to modernize not just the infrastructure, you need to modernize your human resources and your teams as well. Earlier I showed a framework for adopting 4G to 5G using the same architecture with OpenShift, right? And the tooling. So part of it is actually the tooling: I used to use these tools specifically for 4G LTE, and now I have new tools when it comes to CI/CD that I'm adopting with cloud native. And as we make this shift, we're actually able to unify the tooling and unify the teams.
(18:32):
So the skill gap is actually being addressed partly by technology, but in a lot of cases you still have to close that skill gap, and at Red Hat we offer more than one option to close it, right? First of all, we have consulting engagements where we can provide a hold-your-hand-until-you're-ready approach, where we not just help you with the reference architectures that we covered earlier, but actually set them up, build the expertise with the team and do the knowledge transfer. But I think it goes deeper than that. This is where we came up with our program called Fly Path, right? It's a framework that allows us to help customers, first of all, with the analysis phase, what we call the assess phase, like the three A's, assess, assist and accelerate, where we first of all bring our expertise but see where the customer is in terms of maturity level.
(19:33):
Some of the customers have already adopted practices or tooling and so forth, so the starting point may differ from service provider to service provider. So this is all about, first of all, getting that baseline, and then assisting in actually getting it right. And this is where in many cases we can even use the blueprints we already have, adapt them with the customer to their needs, and then accelerate by making sure they're getting all the expertise and knowledge as part of that, via these well-defined blueprints that we provide, which in a lot of cases we co-engineer with our customers and partners. So beyond Fly Path, we are also looking at the skill set, as I pointed out, as a transfer to the team, but a lot of what we are looking at is more about building cross-functional teams as a practice. It doesn't end with us coming in and helping you until you're ready.
(20:34):
It's all about making sure that you as a service provider are actually empowered and building this center of excellence within the organization around embracing DevOps and SRE. So you can start with a core team and then grow it into more of a cross-functional team, via knowledge transfer and skill development. Part of the work we do with our framework is actually educating the customers on these practices, but the whole point is for the customer to be independent and able to run it. And in a lot of cases, that is the continuous innovation: fostering a more skilled workforce that can help bridge the skill gap. So it's a combination of technology best practices and the validated architectures that we have, alongside training, alongside actually making sure that you build that cloud-native mindset within the teams and grow it within the team. One last example I can give is embedding a member of our Red Hat consulting within the team for a period of time, sharing the knowledge and then stepping out, right? That's another gradual methodology we use to help customers as they adopt 5G.
Guy Daniels, TelecomTV (21:55):
Fascinating. But we must leave it there for now, Sean. It's really good talking with you again, and thanks so much for sharing your solutions with us today.
Sean Cohen, Red Hat (22:02):
Thanks Guy. Always a pleasure being here.
Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.
Sean Cohen, Director of Product Management, Hybrid Platforms, Red Hat
Red Hat’s Sean Cohen explains the essential business drivers for service providers undertaking a cloud-native transformation and looking to gain board-level commitment. He discusses how service providers should structure phased migration blueprints to reconcile legacy network functions with cloud-native adoption, and which reference architectures telcos should use for core, edge and RAN workloads. He also explores how security-by-design and zero-trust principles should be integrated into cloud-native operations, and why telcos need to assess and bridge skill gaps for continuous innovation.
Recorded October 2025