Hello, you are watching the Cloud Native Telco Summit, part of our year-round DSP Leaders coverage, and it's time now for our live Q&A show. I'm Guy Daniels, and this is the first of two Q&A shows; we have another one at the same time tomorrow. It's your chance to ask questions on cloud-native processes, strategies, and opportunities. Well, as part of today's summit, we featured a panel discussion that looked at end-to-end roadmaps for cloud-native transformation. If you missed the panel, don't worry, because we will rebroadcast it straight after this live Q&A program, or you can watch it any time on demand. Now, we have already received several questions from viewers, but if you haven't yet sent us one, then please do so now using the Q&A form on the website. And I'm delighted to say that joining me live on the program today are Sean Cohen, who is Director of Product Management, Hybrid Platforms at Red Hat.
(01:36):
Andrew Douglas, Senior Director, Global Telco Lead with Pure Storage; Paul Miller, CTO of Wind River; Francisco-Javier Ramon Salguero, Multi-Cloud Tools Manager at Telefonica and Chair of ETSI OSM; Carlos Torrenti, Pre-Sales Solution Architect, Cloud, at Rakuten Symphony; and Joan Triay, Deputy Director and Network Architect at DOCOMO Euro Labs and Rapporteur of ETSI ISG NFV. Well, hello everyone. It's really good to see you all again. Thanks so much for joining us on the live Q&A show. So let's get straight to our first question, submitted by one of our viewers, and let me read it out to you now: how do you bridge the cloud-native skills gap to maintain progress and ensure continuous innovation? Paul, I wonder if we can come across and get your views on this first.
Paul Miller, Wind River (02:36):
Thanks Guy. I think we as an industry have been struggling with this for years. We all recognize that one of the biggest barriers to velocity in cloud-native operations is not just technology adoption or the platform you're choosing, but the skills required to fully operationalize it. We think that an approach where you design the platform, the analytics and the orchestration layers to actively reduce the skills burden, while also creating pathways for continuous upskilling, is the way to solve this problem. It starts with a few things. How do we actually do that? The first is to simplify the platform choice. One of the things that we like to do is integrate a production-ready, carrier-grade Kubernetes stack that's been proven at scale, with built-in automation, high availability and real-time capabilities. Effectively, if you do this, you're delivering a complete platform, right? It's fully integrated. It's not a toolkit.
(03:33):
Doing this really eliminates a lot of the tuning and integration work that a customer is often challenged with, and therefore you're reducing the level of expertise and knowledge that the customer needs to have in order to deploy and operate the system. They become more focused on outcomes rather than the plumbing of the system that they're installing. I think the next piece is critically important, which is really about operations. Those of us in the vendor community are always focused on the bits and bytes of the technology that we provide, but really for a customer, a service provider, the challenge is about operations, and operations particularly at scale. One of the ways we think that should be solved is with analytics that embed AI-driven insights, predictive alarms, and closed-loop automation into that operational fabric. Thus, instead of requiring every operator to be a Kubernetes or observability expert, you're allowing analytics to distill that complex telemetry into actionable guidance that they understand, and that allows teams that may not be experts in, say, cloud-native infrastructure to maintain service velocity even if their skills are uneven.
(04:42):
Then next, of course, is orchestration and lifecycle management of the deployed applications. You've got to have a unified orchestration and application lifecycle management platform eliminating complexity, so that people can deploy and manage workloads seamlessly in this complex environment. Of course, as you look to upskill people, make sure that the platforms you deploy all of your telco infrastructure on are based on open source, so that you're using things for which there is a growing and broad talent pool emerging, already pre-trained as they enter the community and start using the technologies that we bring into the service provider. So, in short, we don't assume that the cloud-native talent gap is going to disappear overnight, but taking an architectural approach where you integrate automation, abstraction and analytics directly into the platforms, and then leveraging open source and this sort of thing, is really going to help ease the burden for operational teams trying to transition to cloud-native technology.
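Paul's description of an analytics layer that distills raw telemetry into actionable guidance can be sketched in a few lines of Python. This is purely an illustrative toy (the metric, window size and threshold are invented), not Wind River's actual analytics:

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=5, threshold=3.0):
    """Flag samples that deviate sharply from the recent baseline.

    A toy stand-in for the analytics layer described above:
    raw telemetry in, actionable alerts out, so the operator
    does not have to interpret raw numbers themselves.
    """
    alerts = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(samples[i] - mu) > threshold * sigma:
            alerts.append((i, samples[i]))
    return alerts

# Steady CPU readings with one spike that should surface as an alert.
cpu = [41, 42, 40, 43, 41, 42, 95, 42, 41]
print(detect_anomalies(cpu))  # → [(6, 95)]
```

Real systems replace the rolling-window statistics with trained models and feed the alerts into closed-loop automation, but the shape of the pipeline is the same.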
Guy Daniels, TelecomTV (05:42):
Great, thanks very much, Paul. Good to get that take on the technology and architecture approach to this question. But Sean, I'm going to come across to you as well. Have you got thoughts on this?
Sean Cohen, Red Hat (05:52):
Yeah, so I basically want to emphasize, right, obviously the last point about relying on open source, but I think the way we address the skills gap specifically at Red Hat, because we work with so many service providers, is actually through a multi-layer approach. The first one is a program we established called Flight Path, which basically consists of the three A's: assess, assist and accelerate. So typically we start with a maturity assessment, looking at the existing service provider skills to identify the gaps but also the strengths in the team. The assist part is more about creating that strategic plan, basically guiding the customer through the organizational transformation. In a lot of cases it's not even technology related; it's processes and people, especially when you have silos of operations and so forth. So that comes into play, and then the third part relates to what we discussed previously around the blueprints in the industry.
(06:59):
That's accelerate, and accelerate is basically expediting the progress through well-defined reference architectures, blueprints and open source capabilities. To help customers navigate, one of the things we're doing is providing a 'hold your hand until you're ready' approach, which allows that operations team to absorb DevOps and SRE principles and practices while working with our consulting team to a degree. And additionally, I think one of the things that we've seen work successfully is including new talent, right? As you do recruitment, bring in new blood, it can even be recent graduates, but also combined with third-party consultants to a degree, to allow you to build that knowledge gradually so you can operate. Keep in mind a lot of it is like, hey, I need you now to move to a more GitOps or pipeline-based approach, and having these teams help you build this can really help move forward and close the skills gap.
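The GitOps shift Sean refers to boils down to one core loop: declare the desired state in Git, and let a controller reconcile the live state towards it. A minimal sketch of that idea, with invented workload names and no relation to any specific tool:

```python
def reconcile(desired: dict, live: dict) -> list[str]:
    """Compute the actions needed to drive the live cluster state
    toward the desired state declared in Git. A toy illustration of
    the GitOps reconciliation principle, not a real controller."""
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(f"create {name} -> {spec}")
        elif live[name] != spec:
            actions.append(f"update {name} -> {spec}")
    for name in live:
        if name not in desired:
            actions.append(f"delete {name}")
    return sorted(actions)

# Git says: 3 UPF replicas and an AMF; the cluster disagrees.
desired = {"upf": {"replicas": 3}, "amf": {"replicas": 2}}
live = {"upf": {"replicas": 2}, "smf": {"replicas": 1}}
print(reconcile(desired, live))
```

The operational win is that the Git repository, not tribal knowledge, becomes the source of truth, which is exactly what lowers the skills barrier for the teams running it.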
Guy Daniels, TelecomTV (08:04):
Fantastic, thanks very much, Sean. It's good to hear that there's a lot of effort going on at the moment to address this skills gap. Joan, let's come across to you as well and get your comments.
Joan Triay, DOCOMO Euro Labs & ETSI (08:14):
Yeah, thank you. I wanted to add just briefly a couple of points, more from the operator point of view. I fully agree with what has been said, and I think it's also very important to emphasize that there is a very good opportunity for operators, wherever there is a skills gap, to actually complement it with the right partners, partners that have good knowledge of these technologies and that can help the network operator understand how to use them. Now, I would say that doing only that is not enough. I mean, there is also a need for a change of mentality on the operator side. For instance, in our case, in our company, we identified the very important need to restructure, to reorganize the company, to make sure that the right skills were being leveraged to deploy the network at scale. For example, we had in the past siloed domains of network management, development, implementation and so on.
(09:24):
Since we visualized that this transition to cloudify the network was coming, a dedicated department was created in the company to actually help bridge all this knowledge internally across all the domains. We have a dedicated service and platform design department; they are the ones that are actually responsible for building the infrastructure and the platform, not just for the core network but also for the access. That brings excellent opportunities internally within the company, because there's a lot more transfer of knowledge and collaboration between different departments. And one thing that I would like to emphasize: this is of course dependent on having the right skills, the good teams, and here I would like to put a lot of emphasis and importance on having the right team leaders. In our case, for example, we successfully identified the right people to lead teams, with long experience in telco development and telco operations, who themselves also re-skilled towards this cloud-native way, and they are the ones that are helping the rest of the teams to come along and come together. That is effectively a good way forward and a good strategy for the operator, as I said, also trying to collaborate and have a good transfer of knowledge with the right partners.
Guy Daniels, TelecomTV (10:59):
Thank you very much for that contribution. Excellent. Francisco-Javier, let's come across to you and get your thoughts on this question about bridging the skills gap.
Francisco-Javier Ramon Salguero, Telefonica & ETSI (11:09):
Yeah, I concur with many of the strategies being enumerated, and having good partnerships is a must in order to progress fast, because there is obviously a gap of specialization where you need to find the best balance. But also, in our case, there are some good tracks for formal technical training that is a complement to actually learning by doing. I mean, not just having the theoretical knowledge, but also being able to exercise it gradually, with different levels of sophistication, so you can actually understand the implications of the technology you are dealing with, and it does not just become a pure management exercise. Also connected to that, you need to identify different assignments in the company that can drive technology transfer and know-how transfer, because the people working on day-to-day support are probably too busy to incorporate the new know-how, while the same company has certain centers of excellence or pioneers who are beginning with a new wave of technologies. We are in an environment where, even in one specific topic like cloud native, the pace of evolution is super fast.
(12:44):
So being up to date matters: the know-how that you had three or four years ago is today a bit obsolete, to some extent. So you need to refresh it, and you probably don't have the personal bandwidth, but at a company level and a group level that's something that can definitely be accommodated, and I think that we are being quite successful with that approach.
Guy Daniels, TelecomTV (13:08):
Fantastic. That is good to hear. Thanks very much indeed. And Andrew, we'll come across to you as well on bridging the skills gap.
Andrew Douglas, Pure Storage (13:15):
Yeah, I mean, I'll just be quite brief; I think we've covered a lot of topics so far. What I would say, and I think Joan picked up on this point, is that for operators there has always been a general push, certainly in the network, towards a lot of self-engineering, a lot of teams building these very, very complex things that they do, and they are very complex things. But a move to cloud native needs a particular type of skillset and experience, and there are so many organizations out there that are doing this across different industries, but also within the telecom space, that are geared to providing these types of resources. They've undertaken these huge integration projects, which these generally are: integration projects involving many different suppliers and many different technologies. Someone touched upon it before: not only that, but all the business processes that sit around them. So in brief, the market, across different sectors, has delivered this many, many times, and I would strongly suggest and urge, and where I've seen success with operators is where they've not only embraced the open ecosystem of technologies, but also embraced the open ecosystem of partners who can deliver these types of programs.
Guy Daniels, TelecomTV (14:41):
Yeah, thanks very much indeed, Andrew. Well, thanks everyone for those comments on our opening question. We are getting a lot of questions in from viewers, so let's move swiftly on to our next one. And this is all about security: how is security by design and zero trust being embedded into cloud-native operations? That's a good one. Sean, are you able to start us off?
Sean Cohen, Red Hat (15:08):
Yes. So maybe let's unpack first what zero trust means. There are actually seven layers to zero trust. It starts all the way from the user, goes to the network and equipment, data, applications, all the way to the automation and orchestration, and into the analytics part. It's really multi-layer, and you have to have a strategy to address all these layers. Red Hat's strategy is actually focused on 'never trust, always verify' and least-privilege access. So you have to embed it right from the start. It means actively applying zero trust principles in your cloud-native operations. For example, in OpenShift network security, we're moving away from long-lived API tokens to more short-lived, use-case-based ones, basically to reduce the exposure window for attacks. Security is all about multiple layers, and obviously historically we were protecting the perimeter, right, firewalls and so forth.
(16:11):
Right now it's down to a specific transaction and specific authentication that has to be very short-lived and die right after. Another key aspect is the secure management of all the confidential data, such as credentials, TLS certificates and so forth. In Kubernetes, and OpenShift specifically, we have the Secrets Store CSI driver that allows you to deliver secrets as ephemeral volumes, right? We also leverage the External Secrets Operator, and we integrate with strong tools like HashiCorp Vault, which help you manage not just secret storage but also rotation, encryption and so forth. And then we also address other layers, such as securing workloads and enabling detection of threats during operation itself, where we have tools such as Advanced Cluster Security that allow you to do runtime security, focusing specifically on network isolation and threat detection at each one of those layers. Today you can even use network policy enforcement with eBPF, which is very lightweight and embedded in the kernel.
(17:27):
And finally, something we introduced recently in our latest versions of OpenStack Services on OpenShift 18: live kernel patching. Think about all the times where you had to plan your maintenance window. There are a lot of CVEs, a lot of security fixes you have to apply that touch the kernel and force you to do reboots. Now, with this feature, you can actually apply those fixes live to the kernel. So it really saves a lot of maintenance window time, and it actually addresses what we started with: minimizing the attack surface and the exposure time by applying fixes faster. Because one of the things we keep talking about, even in this panel, is the shift towards AI, native AI, but with AI there's also the other side of the coin, which is that we see a surge in the number of attacks, AI-powered attacks, right? And they're becoming much faster and more sophisticated. In fact, what we're seeing from the CVE surge is that by the end of this year, by the end of 2025, it's going to reach about 300,000 common vulnerabilities and exposures that you have to address. So having a feature like live kernel patching is critical to allow service providers to address this. But again, as I stated in the beginning, it's a layered approach to zero trust, and you have to apply basically a multi-level approach to address it.
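The move from long-lived API tokens to short-lived, use-case-based credentials that Sean describes can be illustrated with a toy token scheme. The HMAC signing here is a simplification for illustration (real deployments use standards such as OAuth 2.0 tokens with managed keys), and the key and subject names are invented:

```python
import base64
import hashlib
import hmac
import time

SECRET = b"demo-signing-key"  # illustrative only; never hard-code real keys

def issue_token(subject: str, ttl_seconds: int, now=None) -> str:
    """Issue a signed token that expires quickly, shrinking the
    window an attacker has to replay a stolen credential."""
    expiry = int(now if now is not None else time.time()) + ttl_seconds
    payload = f"{subject}|{expiry}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str, now=None) -> bool:
    """Never trust, always verify: check both signature AND expiry."""
    try:
        payload_b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(payload_b64)
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False  # tampered or forged
        _, expiry = payload.decode().rsplit("|", 1)
        return (now if now is not None else time.time()) < int(expiry)
    except Exception:
        return False  # malformed tokens are rejected, not trusted

token = issue_token("operator", ttl_seconds=60)
print(verify_token(token))  # valid within its 60-second window
```

The key point is the second check: even a correctly signed token is rejected once its short lifetime lapses, which is what closes the exposure window.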
Guy Daniels, TelecomTV (19:00):
Great, thanks very much, Sean. A layered approach there. Paul, let's come across to you next for your thoughts on security by design and zero trust.
Paul Miller, Wind River (19:10):
Yeah, thanks Guy. Obviously a great question, and frankly an incredibly complex one, as you can see from Sean's answer, which I very much agree with. The thing I'd add is a bit of a broad perspective, which is that we tend to view security not as a component or an add-on, but as an architectural principle intrinsically woven throughout all the different elements you have in a cloud-native solution, that being the CaaS layer and, importantly, analytics, orchestration and automation together as a complete zero trust system. Integrating security-by-design practices and zero trust principles, as we're talking about here, happens across the entire lifecycle. It's not just in the product we ship; it also has to be during design, then during deployment, and then during runtime operations. All three of those phases are critical to making things successful. So how do you start, right?
(20:02):
You start with a hardened CaaS foundation that has secure boot, kernel hardening and mandatory access controls. Make sure that you certify and support those controls in production environments. During development, you've got to make sure you've got a secure CI/CD pipeline. You've got to make sure your software builds pass through signed image pipelines, that they get vulnerability scanned, that they have compliance checks, to ensure that only verified software enters the customer environment. Of course, you need to leverage the technology: where Kubernetes comes in and gives you multi-tenancy and namespace isolation, make sure that that enables your workloads to be contained and privilege boundaries are enforced. As you look more towards operation rather than design, you've got to make sure every API call, every workload, every operator action is authenticated and authorized. No implicit trust inside the system. Zero trust doesn't mean trust nothing.
(21:00):
It really means full authentication and authorization, so that the user is enabled into the system for that action. Make sure that communications are encrypted: east-west, pod to pod, and north-south external traffic. It's got to be encrypted by default using things like TLS and strong certificate management and renewal. You've got to do continuous posture monitoring; you've got to do dynamic policy enforcement, making sure that you have analytics in the solution, not just the platform, so you can ingest telemetry from all the different deployed nodes, clusters and applications. That allows the user to surface anomalies. Watch out for things like predictive analytics that can flag problems before they happen and indicate potential intrusions in real time; if you don't have analytics, you don't have visibility into your deployed system. Of course, bringing that to a closed-loop capability so that you can fully automate remediation, perform auto-patching, trigger node quarantines, this sort of thing becomes incredibly important.
(22:05):
So this is an incredibly important topic we're talking about here, especially as service providers are deploying things like Open RAN, where there are tens of thousands of sites managed as a single distributed cloud. The CVE threat surface of that footprint is incredibly large. So these aren't abstract concepts. You've got to implement cloud-native operations through secure foundations, through a completely integrated stack, and that ensures our customers can trust their deployed environments and operate them on a day-to-day basis without the risks of the various threats that cybersecurity issues can bring to the forefront. So a pretty complex topic, but it is solvable; it just requires a lot of effort and capability.
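Paul's point about signed image pipelines, where only verified software enters the customer environment, can be sketched as a build-time signing step plus a deployment-time admission check. This is a toy HMAC-based sketch with an invented key; real pipelines typically use PKI-based image signing (for example, Sigstore's cosign):

```python
import hashlib
import hmac

BUILD_KEY = b"ci-signing-key"  # hypothetical; real CI uses managed PKI keys

def sign_image(image_bytes: bytes) -> str:
    """Signature produced by the trusted CI/CD pipeline at build time."""
    return hmac.new(BUILD_KEY, image_bytes, hashlib.sha256).hexdigest()

def admit_workload(image_bytes: bytes, signature: str) -> bool:
    """Deployment-time gate: only images whose signature verifies
    are admitted, so unverified software never enters the cluster."""
    expected = hmac.new(BUILD_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

image = b"container-layer-contents"
sig = sign_image(image)
print(admit_workload(image, sig))         # genuine build is admitted
print(admit_workload(image + b"!", sig))  # tampered image is rejected
```

In Kubernetes this gate would live in an admission controller, so the check happens automatically on every deploy rather than relying on operator discipline.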
Guy Daniels, TelecomTV (22:49):
Thanks very much, Paul. Yes, it is complex and a lot to unpack here and I think it's worthy of a panel in its own right, so maybe next year we shall do that. Carlos, I'd like to bring you into the conversation. What are your thoughts on this?
Carlos Torrenti, Rakuten Symphony (23:02):
Yeah, I fully agree with what has been said so far about security being a multi-layer approach and being embedded into the full cycle. In the case of Rakuten, it's maybe a peculiar situation because we are both a technology company and, at the same time, an operator. So part of our processes are really about security, to make sure that whatever technology we develop gets securely delivered to our customers, including ourselves. And that means that things like CI/CD processes, GitOps policies, infrastructure as code, all those things have to be really embedded into our development process from the start, from the moment that we envision a piece of software until the point it is delivered to the network, and then on day two while it is managed, right? So it's really important to include things in the delivery processes as vendors: security scanning tools, dependency scanning tools, static analysis, all those kinds of tools need to be really part of your delivery process as a technology vendor.
(24:17):
So that when those things get delivered to the customer, they are really secure, and you can make sure they have gone through all those steps in the delivery chain. And maybe to bring an angle that hasn't been mentioned before: I think security in some aspects is preventive, but in other aspects it needs to be reactive as well, because, as has been mentioned, we are increasingly seeing different attacks, and we're seeing that malicious actors can eventually break through into the network. So it's really important to have the right tools to react, and there are different angles to that, right? On one side, you have to have the observability tools to make sure that whenever there is an incident you can quickly react. And if you have those closed-loop automation capabilities, which, for example in the case of Rakuten, we have built into our cloud OSS tools, to make sure that any incident can be managed as automatically as possible, then that saves you from real impacts on your network.
(25:30):
And the other aspect that I think is important for that day-two, ongoing operational state is data, right? You have to integrate the right capabilities in your data layer, to make sure that you can configure those policies across your entire deployment, so that whatever happens in your network, you're secure, because you have those capabilities in your data layer to provide backups and quick restore capabilities for your applications. That way, if an incident happens, you can restore to the normal state in the shortest time possible and make sure that your customers don't get impacted. So I think those are really important components of any end-to-end security approach in a cloud-native environment.
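Carlos's point about quick-restore capability in the data layer can be sketched as a snapshot/restore pattern. A toy in-memory illustration (real data layers snapshot persistent volumes and databases, not Python dicts, and the field names are invented):

```python
import copy

class StateStore:
    """Toy data layer with snapshot/restore, illustrating how a
    known-good backup limits the blast radius of an incident."""
    def __init__(self):
        self.state = {}
        self._snapshots = []

    def snapshot(self) -> int:
        """Take a point-in-time copy and return its id."""
        self._snapshots.append(copy.deepcopy(self.state))
        return len(self._snapshots) - 1

    def restore(self, snapshot_id: int) -> None:
        """Roll the live state back to a known-good snapshot."""
        self.state = copy.deepcopy(self._snapshots[snapshot_id])

store = StateStore()
store.state["subscribers"] = ["alice", "bob"]
sid = store.snapshot()               # scheduled backup
store.state["subscribers"] = []      # incident corrupts live data
store.restore(sid)                   # quick restore to normal state
print(store.state["subscribers"])    # → ['alice', 'bob']
```

The deep copies matter: a snapshot that shares mutable structures with the live state would be corrupted along with it, defeating the purpose of the backup.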
Guy Daniels, TelecomTV (26:20):
Carlos, thank you very much indeed. A lot of good advice there from everyone; thanks for those responses. Our next question reads: is a long-term strategy of cloudification, digitalization and AI predicated on a cloud-to-network convergence, and how does cloud native assist with such a roadmap? Well, Joan, can I perhaps come across to you to get your take on this and answer the question for our viewer?
Joan Triay, DOCOMO Euro Labs & ETSI (26:54):
Yeah, sure. Okay. On the first part of the question, I would say the answer will be yes, but actually I like the question a lot as well, because I see that it is now also resonating outside our domain, beyond the people and teams that are more dedicated to cloud native and cloudification. As you may know, there are also activities now, in the vision of 6G, about in-network computing and AI agents and so on. So I would say that this question is very relevant because it's going to go full stack, not only to what concerns the platform, but actually to the services that we operate and will be able to deliver. I like the question as well because, when I was discussing this with teams internally, I was explaining that in fact this transformation is not just something that is being realized right now.
(27:59):
Actually, this transformation already started when we took the first steps of virtualization in the legacy systems. Basically, we had the network separated completely from the compute, and the network was just the pipe to connect the users to the servers, to the compute services. That already started to change when we began to incorporate virtualization and cloud native in our network, to actually build the network. That was the first step to integrate part of the compute, not for the purpose of delivering services, but more from the perspective of helping to build the network. Now, this is just an intermediate state; as soon as you start doing that, you come to realize that the next step is actually the full convergence of compute and network, and this is where cloud native is going to play a key role, because these technologies will help cope with the added complexity of the disaggregation of the network, and the additional resources that need to be considered can be handled with AI and proper automation tools.
(29:14):
It also helps in bringing the necessary modularity to actually build all this convergence of compute, network and additional services. And it also brings very nicely the convergence of using common tools to perform, for example, observability: we can use nowadays, and it's going to continue in the future, the same tools to collect and understand the telemetry from the compute and the telemetry from the network, and actually be able to process that data together. So this is one of the key benefits: cloud-native technologies will help operators bring about this full convergence.
Guy Daniels, TelecomTV (30:02):
Thank you very much indeed for addressing that question, and we've got some more comments on this one. Let's get some brief comments from Francisco-Javier first.
Francisco-Javier Ramon Salguero, Telefonica & ETSI (30:12):
Yeah, I think this is an excellent question for commenting on the topic of IT-network convergence, and I tend to agree that having a strategy for that avoids the traditional silos between the IT and cloud environments, the separate NFV-like environment or telco cloud environment, beyond the technical requirements related to special IO-intensive workloads, special connectivity and so on. I mean, the more that you can converge on common techniques, the better for the ease of operation and for benefiting from the waves of innovation that are happening in this space. So the less special that you are, and the more that you can incorporate best practices and innovations that are coming from other industries, the better. We have discussed many topics here like CI/CD, et cetera; these are not things that were generated from this industry, nor even the cloud-native concepts, yet we are benefiting from them for our own deployments. It doesn't mean that we shouldn't be careful in identifying the peculiarities, the workloads that are subject to some regulation, et cetera. But the more we can move to a continuum of clouds, either private or public or a mix, for regular IT workloads and telco workloads, the better, because it may make our lives easier and allow us to work at a much faster pace.
Guy Daniels, TelecomTV (32:06):
Great. Thanks very much. Francisco-Javier and Sean, can we get some brief comments from you on this question as well?
Sean Cohen, Red Hat (32:13):
Yeah, I want to frame it in one sentence, right? Cloud native is the foundation for becoming an AI-native operator. Without the agility that all of my colleagues mentioned earlier, you cannot actually move to the next level, right? Cloud native is foundational for building agile networks, and this agility is essential for introducing AI-driven applications and, obviously, for innovating faster and delivering services faster to answer market demands. And again, it's a whole spectrum of areas you can influence. Carlos mentioned earlier the observability piece, which is key: AI brings the capability today for us to actually know better what's going on, right? Whether it's anomaly detection or predictive maintenance, you can do things that also impact your costs. So it's not just nice to have; this is how you become efficient in the future, leveraging this set of tools. The other side of the coin is that service providers have all this data already.
(33:25):
If you think about what you need to be an AI-native operator, you need the data. And one of the problems that I see already now with customers is standardizing the data, because you have so many different components and different vendors in your network, and each one of them has its own schema. So for you to even get the fruits of being an AI-native operator, once you employ cloud native as that foundation, you also have legwork to do, which is standardizing the data. And then, as I pointed out, it goes back to the ability to unlock new revenue streams through the advances in a lot of the things we mentioned earlier, whether it's the 5G capabilities or the new API opportunities that are coming, as well as, with AI, becoming an AI-native operator. So to summarize: for me, cloud native is the foundation; this is the prerequisite for moving to the next level.
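The data-standardization legwork Sean describes can be sketched as a normalization layer that maps each vendor's schema onto one common record. All field and vendor names here are invented for illustration:

```python
# Each vendor reports the same metric under its own schema; a mapping
# layer normalizes the records so analytics and AI see one schema.
VENDOR_MAPPINGS = {
    "vendor_a": {"cpuUtilPct": "cpu_percent", "nodeId": "node"},
    "vendor_b": {"cpu_load": "cpu_percent", "element": "node"},
}

def normalize(vendor: str, record: dict) -> dict:
    """Translate a vendor-specific record into the common schema."""
    mapping = VENDOR_MAPPINGS[vendor]
    return {common: record[raw] for raw, common in mapping.items()}

a = normalize("vendor_a", {"cpuUtilPct": 71, "nodeId": "edge-17"})
b = normalize("vendor_b", {"cpu_load": 64, "element": "core-02"})
print(a)                      # → {'cpu_percent': 71, 'node': 'edge-17'}
print(a.keys() == b.keys())   # → True: one schema feeds the AI pipeline
```

Real deployments do this with telemetry pipelines and agreed data models rather than hand-written dicts, but the shape of the problem, many schemas in, one schema out, is the same.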
Guy Daniels, TelecomTV (34:21):
Great, thank you very much, Sean. And we are continuing to get a lot of questions in, too many questions to address in this short program; we've got through three questions so far. I would like to point out, though, that we just had one comment in to say: surely networks and cloud will diverge, not converge. Maybe that's something we can pick up on after this program. But I do want to check in now on our audience poll for this year's Cloud Native Telco Summit. The question we are asking this week is: cloud native can deliver substantial benefits to telcos, but what are the main challenges to its adoption? Right behind me you can see the real-time votes that we have received so far, and it does look like the main challenge identified so far is integration with legacy systems and functions. We've still got time to go on this poll; if you've yet to vote, then please do so. We will take a final look at the voting during tomorrow's live Q&A show. We do have time for more questions, though we have to speed through some of them, so let's get on to our next one as quickly as we can. This is from one of our viewers who asks: what role do API-first strategies play in accelerating cloud-native projects? Very interesting, very topical. Andrew, do you want to have a stab at this one?
Andrew Douglas, Pure Storage (35:59):
Yeah, I'll have a go. Good timing, because it links to the poll that you just showed there. The highest percentage, I think it was 60-odd percent, was around legacy integration and the complexities that come with it. So here's the good news: all of us on this panel who are working as suppliers are focused on what APIs are embedded as standard into what we do. It's not an if; it's how we're going to do that in everything that we do. And really, for me, it's summarized in open integration. We've been talking about this for so long over the last couple of years; at every event we go to, I think even some of the themes last year here were clearly around openness: Open RAN, openness in the network, in the core and in what we do.
(37:01):
So it's very, very important, and really the way we do that, as we've always done in technology, is through how we adopt and expose our APIs. What I would say is that, certainly in a cloud-native world, APIs are really how we connect microservices. They enable us to develop faster; they enable all of us suppliers on this call to work together, ensure we're open and sharing, and interconnect within a very complex ecosystem. They certainly enable automation at scale, and automation in the network is of critical importance for the operators who live and breathe this every single day, certainly as they start to modernize their networks and develop the as-a-service model, which really is the nirvana most operators are trying to move to. But the one thing I would say, which is interesting to me, is that if you look at core enterprise IT, and I've been doing this for many, many years, that's very much an open ecosystem of different partners.
(38:29):
It's certainly an open ecosystem of different technologies at different levels, and really those microservices and those APIs help to string and bridge it all together. I think now, as we're starting to see a lot more of Kubernetes coming to enterprise scale within the core, and even in the RAN, you're starting to see that what we've learned and blueprinted for many, many years in the traditional enterprise IT space has now shifted into the core. And I think the acceleration of what's happened over the last couple of years has been almost monumental in terms of the speed and the patterns of change. Certainly here at Pure and Portworx, a fundamental part of everything we do is that open ecosystem, ensuring APIs are front and center, which also means we're able to work with our partners who are on this call. So I would conclude that open integration is absolutely one of the core principles of delivering cloud-native services.
Guy Daniels, TelecomTV (39:50):
Thanks very much indeed, Andrew, and we'll go across to Paul as well. Your thoughts on how an API first strategy can help accelerate cloud native projects.
Paul Miller, Wind River (39:59):
Yeah, thanks Guy. I think this is pretty interesting, because think about what a cloud-native system is when it actually gets deployed. You have a hardware layer with its set of APIs; you have a virtualization and operating system layer that sits on top of that; a whole host of components: cloud-based networking infrastructure, the CaaS layer, storage systems. The entire infrastructure layer is incredibly complex, with many different APIs. Then you bring in the application layer; in the actual case of telecommunications, telco solutions: IMS core, 5G core, UPF, CUs, OSS/BSS components. Literally hundreds of applications are being brought together, never mind the IT enterprise back office. Now how are you going to manage this thing? How are you going to make it simple to manage? Going back to our first question, where we talked about the skills deficit that we have in this industry, the simple answer is really APIs and automation, right?
(40:56):
It's really not possible for a single human being to wrap their head around the complexity of these network systems being deployed if not for APIs, right? So APIs extending northbound out of all these systems into automation platforms really presents an exciting future for us. Having the right automation and orchestration solutions means you can perform lifecycle management, you can have analytics where you're detecting events as they occur, and even closed-loop automation capabilities where you can correct those problems. We also demonstrated at this year's Mobile World Congress bringing AI into the picture, where you have agentic AI control of APIs and use that as an operational tool to simplify the task of figuring out where problems are across these disparate multi-vendor systems. So I think APIs are really the key to operational management and ease of use, particularly as you get these systems to scale. Without the rich API extensions and support being built across the entire industry, we'd have really no hope of adoption of cloud-native infrastructure.
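The closed-loop idea Paul describes (detect an event via a northbound analytics API, then correct it via an orchestration API) can be sketched as one loop iteration. Both endpoints below are stand-ins, not any real product's API; the alarm fields and workload names are hypothetical.

```python
# Hedged sketch of one step of a closed automation loop: fetch alarms from a
# (stubbed) northbound analytics API, remediate critical ones through a
# (stubbed) orchestration API, and record the actions taken.

def closed_loop_step(get_alarms, restart_workload):
    """One iteration: fetch alarms, remediate critical ones, return actions."""
    actions = []
    for alarm in get_alarms():
        if alarm["severity"] == "critical":
            restart_workload(alarm["workload"])
            actions.append(("restart", alarm["workload"]))
    return actions

# Stub the two APIs for illustration.
alarms = [
    {"severity": "critical", "workload": "upf-1"},
    {"severity": "minor", "workload": "amf-2"},
]
restarted = []
actions = closed_loop_step(lambda: alarms, restarted.append)
assert actions == [("restart", "upf-1")] and restarted == ["upf-1"]
```

A real deployment would replace the stubs with HTTP calls to the platform's analytics and lifecycle-management APIs, which is exactly why rich northbound APIs matter: the loop is only as capable as the interfaces it can call.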
Guy Daniels, TelecomTV (41:59):
Thanks very much, Paul. That's great. Just to remind you, we do have a couple of API programs coming up very soon on TelecomTV, and it's going to be one of the debates at this year's Great Telco Debate. Right, I'm determined to get in a couple more questions before the end of the show. We haven't got that long, but here's another question we've got from the audience: you talked earlier about skills, but how can operators successfully manage the cultural and operational shift to DevOps and a cloud-native way of working? So more about culture and operations here. Carlos, can we come across to you for this one?
Carlos Torrenti, Rakuten Symphony (42:38):
Yeah, sure Guy. Well, I think this is really related to the previous questions about skills, and there's definitely a people and organization transformation. One of the things that wasn't mentioned, and I think it's the pillar of a transformation in any company, is that you have to have executive-level buy-in for that sort of thinking within your company, and having that sort of backing is going to be key, because there are going to be challenges along the way and the transformation is not going to be an easy process. Sometimes you're going to find there are hurdles in how you are applying things; there are things that maybe were hidden, or that you didn't expect to behave in a particular way when migrated to cloud native, like having legacy functions that are difficult to migrate. So having that support across the organization is really important, and once that support from the top level is there, you have to make sure it flows through to your organization as a whole.
(43:45):
In addition to all the things that have been mentioned by the panelists in the previous answers, like formal and informal training, I think there's also the opportunity to experiment. Cloud-native technologies give us the ability to deploy things in production and in labs in the same manner, because everything is containerized and everything is really portable, so it's a very good opportunity to let people experiment with new technologies. And it's the time of AI, of building those gen AIs that can span multiple processes across the organization. I think these cloud-native technologies, and the fact that you can deploy a lab in seconds or minutes instead of having to wait for weeks, are really important for us to innovate and to translate that DevOps culture into the teams. And I would agree with Paul, who said at the beginning that our mission as vendors is to facilitate; not everyone in the company has to be a Kubernetes expert or a CI/CD developer.
(45:04):
Our mission is to provide the right tools for people, in terms of the integration of the APIs and the ability to build simple GUIs to define policies that are portable across environments and can be repeated, so that not everyone in the organization has to be an expert. I'll give you an example: when we define policies for our data, our replication and our security, the DevOps team and the CI/CD team work with our security team. The security team doesn't know exactly how those policies will be implemented, but they're able to define exactly what they want, and then those policies can be translated into files that can be applied consistently across multiple environments, thanks to the capabilities of the solution. These sorts of things are really important when you try to infuse that DevOps philosophy into the company culture.
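The policy-as-code workflow Carlos describes can be sketched as follows: a security team states intent in a small declarative structure, and tooling renders it into concrete per-environment configuration. The policy fields and rendering logic below are hypothetical, not Rakuten Symphony's actual product behavior.

```python
# Hedged sketch of policy-as-code: abstract intent defined once by a security
# team, mechanically rendered into consistent per-environment settings.
# All field names and values are illustrative.

INTENT = {"replication": {"copies": 3}, "encrypt_at_rest": True}

def render(env: str, intent: dict) -> dict:
    """Translate abstract intent into an environment-specific config."""
    return {
        "environment": env,
        "replicas": intent["replication"]["copies"],
        "encryption": "aes-256" if intent["encrypt_at_rest"] else "none",
    }

lab = render("lab", INTENT)
prod = render("production", INTENT)
# The same intent yields consistent settings in every environment.
assert lab["replicas"] == prod["replicas"] == 3
assert lab["encryption"] == prod["encryption"] == "aes-256"
```

The design point is separation of concerns: the team that owns the intent never has to know the per-environment mechanics, which is what lets non-experts participate safely.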
Guy Daniels, TelecomTV (46:11):
Great. Carlos, thank you very much indeed. And Sean, did you want to come in on this one about how we can develop the cultural change required? It's a question we've had for a number of years now, and there's a lot of interest in how we facilitate this cultural shift.
Sean Cohen, Red Hat (46:30):
So I'm going to pick up where Carlos finished: not everyone in the organization is an expert, so it's critically important to promote skill transfer as well as establish cross-functional teams. Imagine you already have talent around DevOps. Not everybody is an expert, but if you embed those experts alongside teams that are not experts, you create that cross-functional team, and you can encourage direct skill transfer as you work. Obviously, as I mentioned earlier, one of the practices is to embed consulting engagements, so you can bring experts to the team, even externally, and have them hold your hand until you're ready. These are really successful practices that we see work. Again, it's all about the mentality. We talk about culture, and culture is different. You have to understand that if you look at the history of service providers, we were like cables, and now we are moving from cables to code, and from code to APIs and AI.
(47:30):
So it's a big shift we're going through as an industry, and I think standardization on the tooling is also key. It's one thing to promote skill transfer, as I said earlier, but you also want to standardize the tooling. Earlier we talked about horizontal cloud-native architecture; the benefit of adopting a horizontal cloud-native architecture is that it allows you to standardize on the tooling, so you have the same way of doing processes like cloud onboarding, automation pipelines and so forth. When your developers need to context-switch to another project, they don't need to learn everything from scratch or move to a completely different set of tools; they're using the same standard tools that are horizontal across the cloud-native estate. That helps them deliver services faster, but it also bridges the skills gap much faster, because it's less toil you're adding on them.
(48:29):
The last point is streamlining the operations model, and it goes back to the legacy poll question we saw: by creating a unified infrastructure that is cloud native across both 4G and 5G, you can actually improve the workloads and the culture as you adopt DevOps and SRE principles, because you streamline the processes, you use the same tooling to manage bare metal, to do observability and so forth, and you reduce the overall complexity. So, to summarize: skill transfer through cross-functional teams and standardizing on the tooling are key, and then you get the benefits of the unified infrastructure that cloud native brings to improve your operational models.
Guy Daniels, TelecomTV (49:21):
Fantastic, thanks very much Sean. And we're going to go across to Paul as well for thoughts on the cultural shift.
Paul Miller, Wind River (49:27):
Yeah, just a quick point I wanted to make. We've seen how our customers tend to operate; the service provider is largely in a plan, build and operate model, and I think the only cultural problem we really see is when you have classical telco expertise colliding with IT expertise, right? We're obviously moving to IP networking and container and Kubernetes-type environments, which is really enterprise technology now entering the telco workspace, so you get a little bit of friction there. But, and maybe this is a bit of a frictional comment, we may be over-rotating a bit and not appreciating how far our customers have actually come along. In my experience working globally with many different carriers, they largely understand cloud native; they understand cloud. This is very different from what it was years ago when NFV first came out and we had a move from proprietary appliances to cloud-based technology. That was a massive mind shift, a massive cultural shift, to bring IT expertise
(50:27):
into the telco. We're through that now, and certainly, yes, with CI/CD and DevOps and maybe the more cutting-edge pieces of this they're still learning, but they're not that far behind, right? They're actually pretty well advanced and have a pretty good understanding of all these technology components and how they're used. I think the complexities of adopting cloud native are not really from that; they're from the difficulty of integrating multi-vendor solutions. As soon as you go to a cloud-native or cloud-based technology, you're taking technology from hardware vendors that are separate from infrastructure vendors that are separate from application vendors. How do you get that solution integrated and deployed successfully? That is what a cloud-native deployment looks like; it's not coming vertically integrated from a single solution provider in most cases. That's what's creating the friction around adoption of cloud native, not necessarily the expertise on containers or CI/CD, and operators are afraid of the cost, complexity and risk of doing that sort of integration. So I do think that finding vendors with great partnerships, that have solved a lot of those problems and ease the adoption of cloud-native technology into a service provider, is really the key to unblocking the cultural shift here and ensuring cloud-native solutions move forward.
Guy Daniels, TelecomTV (51:51):
Paul, thanks very much indeed. And Joan, we'll come across to you to get your feedback on this one as well.
Joan Triay, DOCOMO Euro Labs & ETSI (51:58):
Thank you. I'll also touch briefly on the point that Paul introduced. By the way, I also explained earlier that, from our perspective, organizational changes are important on this cloud-native path. But one of the points that is maybe quite difficult, and it's actually a subject of debate even internally in the company, is that, as network operators, we need to keep to the SLAs and keep the network running at all costs. The aspect that is actually difficult, and probably the most important one at this point, is changing the mindset of risk avoidance that operators have typically had in the past: basically, if something works, don't touch it. This does not work anymore. In the cloud-native era, the network is heavily disaggregated, horizontally and vertically, and there are many more components to be managed. It is a good thing, because we can build our solutions in a multi-vendor environment with best-of-breed solutions, but it also brings complexity. That disaggregation introduces many more components; eventually there will always be something that needs to be patched or updated, and there is always a need to make some change. And if we are afraid to introduce a change because we have a "don't touch it" mentality, then of course the transition is not going to be successful. So yeah, I wanted to add this point, because maybe it was not touched on by the other panelists.
Guy Daniels, TelecomTV (53:50):
No, Joan, thank you very much for doing that. Thanks everyone for those responses. We've had a number of great questions come in that we do not have time for today, and so we will be using those on tomorrow's show, but we can squeeze in one more very quick question and Francisco-Javier, I'm going to come across to you on this one because I think you might be ideally suited to answer this one, and the question is, which cloud native orchestration frameworks best meet telco requirements? Do you think you could help the viewer with this one?
Francisco-Javier Ramon Salguero, Telefonica & ETSI (54:21):
Yeah, I'll try to help, partially. As you know, I'm chair of ETSI OSM, so I may be biased and I'll try to set that aside; I have my own preferences on the topic. The point is that, from that viewpoint, "orchestrator" is an overloaded word, even in the cloud-native space. Some are referring to PaaS-like systems that expose APIs; some are referring to infrastructure-as-code or infrastructure-as-data approaches, some means of describing complex processes very simply in flat manifests. That is some sort of orchestration for some people in the industry, and there is some debate on whether we are evolving from traditional infrastructure as code to something that is more of an infrastructure-as-data framework, like the Crossplanes of the world and the like, but there's no uniform solution there.
(55:32):
Then we have the Kubernetes operators, which in a sense create an orchestration, because in the end they coordinate components, watch components and somehow abstract a complex operation into something that seems simple from the user's viewpoint. There we also have two leaders; the discussion is whether you prefer Flux CD or Argo CD, depending on the use cases and which one reconciles resources in the manner you expect. If we talk about purely Kubernetes environments and cluster management, then some people refer to the orchestrator as, and we have heard this in the panel as well, a platform that manages a fleet of clusters, typically from one single vendor, which obviously is the easiest starting point. There are no clear winners when it comes to multi-cluster managers, although there are partial wins and emerging forms of multi-technology cluster managers.
(56:48):
There are a few of them already in the market, but they are probably better suited to public cloud than to private cloud, so there is clearly a gap there. And probably, if you go higher up the stack, which is what I'm trying to do, lastly, you referred to telco, so you probably mean: what is the role of traditional telco orchestrators here? You have an environment that is likely to self-heal, to be declaratively described and deployed to clusters, with elements that are self-reconciling; what is the role of orchestration? Obviously there's not really a role for orchestration in the traditional sense, where you have a complex choreography of steps that you need to coordinate, put the glue in the middle of, and wait for completion and so on. That imperative type of framework is partly what cloud native is coming to alleviate. In the end, I believe that coordinating intent and declarations, and assisting with that, is where the new wave of orchestrators will focus in the near future.
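The declarative model Francisco-Javier contrasts with imperative orchestration can be sketched as a reconciliation loop: instead of scripting a choreography of steps, you declare desired state and a controller converges actual state toward it. This mirrors what Kubernetes operators and GitOps controllers such as Flux CD and Argo CD do; the workload names and scaling model below are purely illustrative.

```python
# Hedged sketch of declarative reconciliation: given desired and actual state
# (name -> replica count), compute the minimal actions needed to converge.
# A real controller would run this repeatedly and apply the actions.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to make actual match desired."""
    actions = []
    for name, replicas in desired.items():
        if actual.get(name, 0) != replicas:
            actions.append(("scale", name, replicas))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

desired = {"upf": 3, "smf": 2}
actual = {"upf": 1, "old-vnf": 1}
assert reconcile(desired, actual) == [
    ("scale", "upf", 3),
    ("scale", "smf", 2),
    ("delete", "old-vnf"),
]
```

The operator never specifies the steps, only the end state; the loop derives the steps itself, which is why a fleet of self-reconciling components needs far less traditional glue-and-wait orchestration.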
Guy Daniels, TelecomTV (58:07):
Fantastic. There's a lot to unpack there as well. Francisco-Javier, thank you very much. Unfortunately, we are out of time for the show now; we are almost at the hour mark, 59 minutes the show's been running, which is fantastic. Thank you so much to all of our guests who joined us for this live program, and do remember to send in your questions for tomorrow's live Q&A show as soon as you can; don't leave it too late. As I mentioned, we've already got a number of questions held over from this show, and we're going to try to use and prioritize those. And take part in the poll: there is still time for you to have your say. You can find the full agenda for day two of the summit on the TelecomTV website; it includes a panel discussion on how telcos can develop operational excellence as they scale their cloud-native networks and processes. Remember, you can watch that on demand from tomorrow morning. And for our viewers watching live, in case you missed today's earlier panel discussion, we are going to broadcast it in just a few minutes, so don't go away. We'll be back tomorrow with our final live Q&A show, same time, same place. Until then, thank you and goodbye.
Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.
Panel Discussion
This live Q&A show was broadcast at the end of day one of the Cloud-Native Telco Summit. TelecomTV’s Guy Daniels was joined by industry guest panellists for this question and answer session. Among the questions raised by our audience were:
- How do you bridge the cloud-native skills gap to maintain progress and ensure continuous innovation?
- How is security-by-design and zero-trust being embedded into cloud-native operations?
- Is a long-term strategy of cloudification, digitalisation and AI predicated on a cloud-to-network convergence? And how does cloud native assist with such a roadmap?
- What role do API-first strategies play in accelerating cloud-native projects?
- How can operators successfully manage the cultural and operational shift to DevOps and a cloud-native way of working?
- Which cloud-native orchestration frameworks best meet telco requirements?
First Broadcast Live: September 2025
Participants
Andrew Douglas
Senior Director, Global Telco Lead, Pure Storage
Carlos Torrentí
Presales Solution Architect, Cloud, Rakuten Symphony
Francisco-Javier Ramón Salguero
Multicloud Tools Manager, Telefónica, Chair of ETSI Open-Source MANO (OSM)
Dr. Joan Triay
Deputy Director & Network Architect, DOCOMO Communications Lab. Europe (DOCOMO Euro-Labs), Rapporteur, ETSI ISG NFV
Paul Miller
Chief Technology Officer, Wind River
Sean Cohen
Director of Product Management, Hybrid Platforms, Red Hat