New cloud architectures: Private clouds and data lakes

Guy Daniels, TelecomTV (00:23):
Hello, you are watching the NextGen Digital Infra Summit, part of our year-round DSP Leaders coverage. I'm Guy Daniels, and today's discussion looks at new cloud architectures, private clouds and data lakes. After a period of all-in public cloud enthusiasm, many telcos are pulling certain workloads back. On-prem and modern operator-hosted private clouds now look and feel like public clouds: containerized and API-first. But what blueprints, data strategies and operational systems are required to run them well? I'm delighted to say that joining me on the program to discuss these issues are Vivek Chadha, who is SVP, Global Sales Head Cloud and GM, Rakuten Symphony MEA; Dario Sabella, Chairman, ETSI MEC and VP, xFlow Research; Diego Lopez, Senior Technology Expert, Telefónica, and ETSI Fellow; Joan Triay, who's Deputy Director and Network Architect at DOCOMO Euro-Labs and Rapporteur of the ETSI ISG NFV; and Mark Gibson, VP Software at ConnectiviTree. Well, hello everyone. It's good to see you all, and thanks so much for taking part today. First of all, I'd like to ask what practical steps can help telcos preserve the speed and flexibility of a cloud-native operating model once workloads are on-prem and in private clouds? Vivek, can I start with you, because you and I often have this conversation about public clouds, private clouds and cloud native. So what's your take?

Vivek Chadha, Rakuten Symphony MEA (02:15):
So great to be here, Guy, and I think that's a fantastic question to start the conversation today. If I can, I'd like to sum it up in the four Cs. For organizations that are on the journey of moving from public cloud to on-prem, if they want the same kind of user experience or feel at the enterprise level, the four Cs that they need to keep in mind start with capacity. When you provision your on-prem cloud environment, you have to do a reasonable job of understanding what your mean usage, spikes, burst capacity needs, et cetera, are, and by no means do you have to limit that to on-prem. A lot of successful transitions do utilize public cloud for certain workloads, for burst capacity, but they're starting to bring their mission-critical workloads, especially the ones which are sensitive to data, et cetera, on-prem.

(03:07):
So that's the first C, which is capacity. The second is around control. You can have the most accurate capacity forecasting and modeling and build your on-prem environment, including using third-party data centers, but if you don't have a good control layer or mechanism of guardrails — if you let developers run amok — they often tend to consume a lot of resources, which might not necessarily give a good return on investment. So you need a very disciplined set of guardrails and control mechanisms in place to ensure that your cloud capacity is being used in a robust and reasonably governed manner. The third C is competencies. Public cloud has had a couple of impacts on a lot of organizations when it comes to IT and digital skillsets, one of them being that, for the most part, you could tend to outsource a lot of the heavy lift to the hyperscalers or public cloud providers; but a lot of organizations still want to retain control and not have too much technical debt or dependency on their vendors.

(04:11):
So the competency question, especially if you're doing cloud native, is how well your organization adapts to CI/CD principles, microservices, Kubernetes, et cetera, and whether they really have a robust understanding of what it would take — not just from a technical skills perspective but from a process and an overview point of view — to integrate this new way of working when you move to on-prem. And the last one is around culture. It's one thing going to a public cloud account and spinning up a couple of machines, or just activating a certain set of services in a hyperscaler account, but it's very different when you want to do that on-prem. And that culture is both around predictability, but it's also about conscious choices on how you're going to make investments in certain elements of technology, because, within reason, no customer that at least we've come across wants a full-blown replica of a major hyperscaler running; it's too much firepower to solve what are relatively defined problems. So I would say, in my world, those four Cs are a fairly good framework in which you can have a fairly predictable, controlled, but manageable way to get the best of public cloud with all the control of on-prem.
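As an illustration of the "control" C, one common guardrail on an on-prem Kubernetes cluster is a per-team resource quota. Here is a minimal sketch using the official `kubernetes` Python client; the namespace name and quota values are illustrative assumptions, not anything prescribed by the panel.

```python
# A minimal sketch of "control" guardrails: a per-team ResourceQuota on an
# on-prem Kubernetes cluster, applied with the official `kubernetes` Python
# client (pip install kubernetes). Namespace and quota values are illustrative.
from kubernetes import client, config

def apply_team_quota(namespace: str, cpu: str, memory: str) -> None:
    """Cap what one team's namespace can request, so no group of developers
    can run amok and consume the whole cluster."""
    config.load_kube_config()  # use load_incluster_config() when running in-cluster
    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name=f"{namespace}-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={
                "requests.cpu": cpu,        # total CPU the namespace may request
                "requests.memory": memory,  # total memory the namespace may request
                "count/pods": "200",        # hard cap on the number of pods
            }
        ),
    )
    client.CoreV1Api().create_namespaced_resource_quota(namespace=namespace, body=quota)

# apply_team_quota("team-a", cpu="20", memory="64Gi")  # example invocation
```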

Guy Daniels, TelecomTV (05:30):
Thanks, Vivek. That's great — a really good checklist there for everyone to go down and, as I say, establish a framework for decision making. I really do hope we have more time to get onto the question of culture towards the end of the panel. Well, I think we should extend this topic, because Vivek, you mentioned the split between public and private workloads. So I'd like to ask — and Dario, maybe I'll come to you first — which workloads and services still make sense to keep on public cloud platforms, and how do you avoid lock-in when you are leveraging those capabilities?

Dario Sabella, ETSI MEC & xFlow Research (06:08):
Thank you so much for the question. Yeah, I believe this is a very important question: understanding exactly which workloads should be deployed where, depending on the needs and the convenience. Actually, there is value in using any kind of cloud infrastructure, including hyperscaler platforms. But of course, depending on the workload and how critical, or maybe less critical, it is from an IT operations perspective, it can be more suitable for public cloud deployment from a telco point of view — for example some reporting workloads, and even parts of the core network if desired by the operators. We have seen some examples recently of collaboration between operators and hyperscalers in that respect. But I'll also try to invert the question and think about what, vice versa, the value of a private cloud, or the edge, can be, right?

(07:17):
And here you may excuse me, because I'm talking also as chairman of ETSI MEC — MEC being multi-access edge computing — so you may understand that I'm taking, let's say, the angle of trying to explain what the value of the edge and the private cloud is. Actually, the group I'm chairing under the umbrella of ETSI standardization is introducing a standardized architecture, framework and APIs for the edge, but of course deployment options are still a business choice. So we are back to your question. Again, inverting the question: what can be the value of a private cloud or the edge? When it comes, for example, to real-time workloads, or the need to do some AI inference at the edge, or some mission-critical application, or dealing with the security and privacy of the data, then having a sovereign IT infrastructure — a private cloud, even on-prem — can be of value, when you also need to take care of the privacy of the users' data.

(08:34):
So you see, it's a choice depending on the workload and the convenience, sometimes also depending on regulation. Answering the last part of your question, when you asked me about avoiding lock-in: yes, this is really a pain in the industry, and I believe, from my perspective as chairman of a standardization body, the answer is clear, right? We need standards to unlock the market and allow interoperability between different stakeholders. Here, the standard is the solution. Of course, in a standards group there can be multiple players, multiple companies pushing their solutions, but whenever we find a consensus and a decision for the group itself, there is a common way to go, and this is a universal language accepted by everybody for interoperability. That also helps avoid this kind of lock-in. Of course, it's a delicate process done in many steps — so maybe standards are not as fast as we would sometimes like — but the process guarantees transparency and openness, to the benefit of interoperability. So I hope I answered your question, also, let's say, shedding a little bit of light on what we are doing in the standards body which I'm chairing.

Guy Daniels, TelecomTV (10:09):
Dario, thanks so much indeed. Yeah, that's great. And Joan, let's come across to you as well. This whole issue of whether it still makes sense to keep certain workloads on the public cloud — as we've found over the past few years of running these summits, it's not always an either/or choice. So what's your take?

Dr. Joan Triay, DOCOMO Euro-Labs & ETSI (10:27):
Yes, certainly. I would actually answer in this way: there is no black or white case to consider. Even in cases where some applications might not seem very suitable to be moved to the public cloud, operators, because of other requirements or use cases, might need to consider it — for example, if we need to cover disaster recovery use cases, we might actually need to put some workloads in the public cloud if that is how we can guarantee that we keep the availability of our services. Now, when it comes to the applications or workloads that are not suitable, I agree very much with what Dario already explained with examples. I would, in my case, categorize them into three different categories. One is the workloads that are very latency dependent; of course these are not very suitable for the public cloud. Then there are those workloads that imply quite heavy use of bandwidth for egress and ingress.

(11:50):
This is also not suitable, as in many cases it increases quite a lot the cost of using the resources on the public cloud. And the last category I would group into the case where the workload has quite a dependency on very specific types of resources that are very much coupled to the telco domain. The benefit of using public cloud is to make use of the scale of the infrastructure that is already provided, not just for one customer but for many different customers. So if there is something that is required to be customized just for the telco domain, that makes it also not very suitable in terms of, for example, cost. Now, if we keep out all these kinds of workloads, which ones are actually suitable for public cloud? We do have examples in the operators: for instance, workloads that are related to operations, the kind of OSS/BSS functions.

(13:07):
There's no strict latency requirement, and there's also no huge bandwidth usage in those applications, so these are functions that are actually quite suitable to be run on the public cloud. Other applications and workloads that are very suitable: consider, for example, those that are very compute intensive. Not all operators have a huge capacity available for workloads that require a lot of compute resources. As Vivek mentioned, one of the good reasons to move workloads away from the public cloud is to guarantee capacity; in certain cases this is not possible, so the resources provided by the public cloud become a good source of extra capacity for the operator — in particular, as I mentioned, for the compute-intensive ones. And the last category of workloads that I believe are suitable for public cloud is those that are more transactional — business ticketing, billing, CRM applications, these kinds of applications. There is no problem at all, I would say, for them to be run on public cloud, as long as security is ensured and the privacy of data is also guaranteed.

Guy Daniels, TelecomTV (14:33):
Great. Joan, thank you very much for that. Well, let's move the conversation on — away from the split between public and private cloud and onto data. I really want to get a few points made on data in this particular program. I'd like to ask which architectural principles are proving most important when building modern data lakes at scale for both network and subscriber data. And Mark, let's come across to you; I think you have some good insights here.

Mark Gibson, ConnectiviTree (15:07):
Yes, thank you for a great question, and it's been interesting hearing the previous responses as well, which shed some light on where I want to take this conversation. I've spent 15, 20 years in the OSS software space, and with every application I've built, the first question has always been: how do I get data into this application? What are my downstreams telling me? And typically there are multiple versions of multiple old standards, in many cases multiple APIs, and to be able to understand what I'm receiving, I have to be able to translate them. So I've spent a career building adapters and building translation engines and building all sorts of ingestion processes to all sorts of downstream systems. And I just kind of accepted that that's how it was. In the last couple of years I've started to look at the principles of data mesh architecture, and what data mesh architecture does is really invert that whole conversation.

(16:12):
It says: if you are in a data domain, rather than sending your data and assuming the upstream system can ingest it, why not push it out in a pre-agreed format which everyone upstream can subscribe to, understand and pull in? And I think when you ally that to the fact that every single operator I've ever spoken to has built Kafka queues between all of their operational software — so they have these queues with all this data sitting on them, but it's unstructured; whatever the application chose to emit is on that queue rather than being linked up — what data mesh principles, and particularly the idea of a data product, say is that when you push data onto that queue, push it on in a pre-agreed format, push it on in a structured way, so that anybody who subscribes to it already knows, by the data contract, what that data is going to look like when it comes off.
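To make the data-contract idea concrete, here is a minimal sketch, assuming a JSON contract validated with the jsonschema library before records are produced to Kafka via kafka-python; the topic name, broker address and schema fields are invented for illustration.

```python
# A minimal sketch of a data product with an explicit data contract: records
# are validated against a pre-agreed JSON Schema before they ever reach the
# Kafka queue, so every subscriber knows the shape in advance. Topic, broker
# and schema fields are illustrative (pip install jsonschema kafka-python).
import json
from jsonschema import validate
from kafka import KafkaProducer

# The "data contract": agreed once with every consuming team, published alongside the topic.
ALARM_CONTRACT = {
    "type": "object",
    "required": ["alarm_id", "domain", "severity", "resource", "raised_at"],
    "properties": {
        "alarm_id": {"type": "string"},
        "domain": {"type": "string"},    # e.g. "ran", "core", "transport"
        "severity": {"enum": ["critical", "major", "minor", "warning"]},
        "resource": {"type": "string"},  # the key point the record locks onto (port, subscriber...)
        "raised_at": {"type": "string"}, # ISO-8601 timestamp
    },
}

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_alarm(alarm: dict) -> None:
    """Refuse to emit anything that violates the contract, instead of pushing raw data."""
    validate(instance=alarm, schema=ALARM_CONTRACT)  # raises ValidationError on bad records
    producer.send("network-alarms", value=alarm)
```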

(17:13):
And if you do that, suddenly a lot of the things which have been complicated in the OSS space, and by extension the data lake space, become a lot easier. Rather than having a lake of unstructured data that everyone just goes to afterwards and says, well, it's fine, I'll go fishing in it, I'll figure out what it is — in telco, there are only so many key points that you can lock data onto. Your ports, subscriber information and numbers are all really good key points, and if you have that information in your data domain, you can quite easily push it into a common structure and then read back out of it. The other thing that I've been really struck by is how many of these data buses are then queryable by SQL. So say you now have this structured, common data set — you're just doing it in the assurance space, you're just collecting your network alerts.

(18:15):
If they're all coming out in a common format, you can actually query the bus; you don't even really need to do any more. You don't have to wait for the data to be ingested by an upstream system and then synced with another system which has got other data relevant to that decision-making process — it's kind of already there. So data mesh is still in its relative infancy, it's still finding its feet, but what it does do is give you pre-structured and pre-organized data from multiple downstreams that could be 10, 15 years old but still operable. And I've seen a number of telco and operator architectures where they have got two or three generations of standards, but everything's still active and still operable. If at that data domain edge, that boundary, you can push everything out in a common format, then your upstreams suddenly become much easier to develop.
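As a sketch of querying the bus with SQL, the snippet below assumes a ksqlDB server fronting the Kafka cluster and a stream shaped like the illustrative contract above; the endpoint URL, stream name and field names are all assumptions.

```python
# A hedged sketch of querying the bus directly with SQL. It assumes a ksqlDB
# server at the URL below fronting the Kafka cluster, and a network_alarms
# stream matching the illustrative contract earlier; all names are invented.
import requests  # pip install requests

KSQLDB_QUERY_URL = "http://localhost:8088/query"  # assumed ksqlDB REST endpoint

def stream_critical_alarms(domain: str) -> None:
    """Push query against the stream itself: results arrive as alarms do,
    with no upstream ingestion or sync step in between."""
    statement = (
        "SELECT alarm_id, resource, raised_at "
        "FROM network_alarms "
        f"WHERE domain = '{domain}' AND severity = 'critical' "
        "EMIT CHANGES;"
    )
    response = requests.post(
        KSQLDB_QUERY_URL,
        json={
            "ksql": statement,
            "streamsProperties": {"ksql.streams.auto.offset.reset": "earliest"},
        },
        stream=True,  # keep the HTTP connection open for the push query
    )
    for raw_line in response.iter_lines():
        if raw_line:  # ksqlDB emits one JSON chunk per row
            print(raw_line.decode("utf-8"))

# stream_critical_alarms("ran")  # example invocation
```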

(19:11):
And actually then, as a network and telco operator, you can start to get towards — and I'm not saying this delivers it immediately — a common data platform across multiple domains, or even just within a single domain across multiple historic versions of what that domain puts out, which gives you the ability to start to innovate on the data, rather than having to import it into a vendor's platform and then develop on top of that. You have the data yourself. And this is a question I've been asked across my career, when I was a vendor, by buyers of our software: how do I get at my data? This starts to become the answer. You start to define your own data structure and then use the applications for their specific purpose — to go and do circuit design, to go and do your data analytics, whatever it is — but the actual data itself is pre-structured. Now, again, I don't know that we have a fully fledged answer to this, but I've worked with a few exciting companies in the last couple of years who are really starting to grapple with this and really starting to think about what it means in practice to deliver these data products and how to operationalize them. And I think it is a space that, certainly, I'd be interested in pursuing in the standards community.

Guy Daniels, TelecomTV (20:40):
Great, Mark — we'll follow this up later, I'm sure. And the whole idea of the common data platform is something we've been hunting for quite some time. Diego, I'd like to bring you into the conversation now, because I know you are doing a lot of work in this area too. What are your thoughts on the best architectural principles for managing data at scale?

Diego Lopez, Telefónica & ETSI (21:02):
Well, basically we have set up some work, precisely in standardization, on trying to find mechanisms for building — we're using the terms — open, distributed and trustworthy data infrastructures, taking into account that whatever you do, whenever you're talking about computing, we are talking about processing data. It used to be called data processing some time ago, and I remember when I started dealing with this, a long time ago, one of the first things that you were expected to do was define the data models that were applicable in any case. And if you think about it, networks were once called data networks, or data communication networks, because essentially what we're moving is data. Now, with the advent of new mechanisms and closed loops and the benefits of AI, data has become more critical, in the sense that it becomes the essential fluid that moves around all the cycles, et cetera.

(22:17):
And what is important, I believe, is that we manage to — the term that we have begun to use is to 'productize' data — make sure that whatever data is produced by whatever source, whether it is a sensor, telemetry about a piece of network equipment, or billing data for customers, can be shared with other parties in a way that is usable by the other party. That is productizing. The example that a friend of mine always uses for this is: imagine you're producing cheese, and you're producing big cheeses. If you decide to sell the cheeses only whole, it's difficult for all the potential consumers to buy them; but if you sell whole cheeses, half cheeses, a quarter of a cheese, or cheese slices in a package, that makes your cheese much more edible and much more attractive to the market.

(23:21):
The idea is to go into defining mechanisms for doing this with data, so we can offer the data that is available — and, as Mark was saying, in the telco space we can generate, and consume, enormous amounts of data — in a way that can be used by the different potential consumers; produced in a way that can be productized. This is one thing. The other thing that is equally important, or even more so, because we're talking about things that are regulated and relate to privacy and, in general, the rights of our customers, is how we impose data governance. How do we apply proper data governance if we are productizing and making the data available and shareable across the network? How can we identify, define and enforce the proper policies for accessing data, whether we are talking about humans, or workloads, or agents — or whatever you call them — that are acting on behalf of those humans? And how do we relate identities, and how do we establish the rights for accessing and processing the data?
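A minimal sketch of this governance point, under the assumption of a simple role-based policy model in which agents act on behalf of humans; the identities, data products and policy entries are invented for illustration, not Telefónica's actual scheme.

```python
# An illustrative sketch of policy-governed access to productized data: every
# access is checked against an explicit policy covering both human users and
# agents acting on their behalf. The policy structure and identity model here
# are assumptions for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    subject: str       # "alice", or an agent like "forecast-agent"
    on_behalf_of: str  # for agents, the human they act for; else same as subject
    roles: frozenset   # e.g. {"network-ops"}

# Policy table: which roles may perform which actions on which data product.
POLICIES = {
    ("subscriber-usage", "read"): {"billing", "network-ops"},
    ("subscriber-usage", "export"): {"billing"},
}

def is_allowed(identity: Identity, product: str, action: str) -> bool:
    """An agent inherits no more rights than the roles it carries."""
    allowed_roles = POLICIES.get((product, action), set())
    return bool(identity.roles & allowed_roles)

agent = Identity("forecast-agent", on_behalf_of="alice", roles=frozenset({"network-ops"}))
assert is_allowed(agent, "subscriber-usage", "read")        # permitted by role
assert not is_allowed(agent, "subscriber-usage", "export")  # export is billing-only
```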

Guy Daniels, TelecomTV (24:44):
Diego, fascinating as well. It just occurred to me that there's some great work going on here, but still a lot more work needs to be done by the whole industry. Joan, I'm going to come to you: additional thoughts from you on this question?

Dr. Joan Triay, DOCOMO Euro-Labs & ETSI (25:00):
Yes, I have some additional remarks to add, maybe. Mark pointed out the very important aspect of actually making sure that the data is well structured, but the fact is that, as he mentioned as well, many times when we get data from its sources there's a lot of legacy, a lot of different formats being used. It's quite difficult to assume that we can get the data into the data lake already with a specific structure or format. That's why I think it's very important to emphasize that we should treat data lakes not just as a collection of data, but also as an opportunity to transform that data and actually make it usable. As Diego mentioned, what's important is who is going to consume that data. So if the data has not been collected in the format that was expected, or the format that is actually usable, data lakes nowadays provide functionalities so that we can transform that data into a form that actually serves the business and the purpose for the operator.

(26:23):
And one other aspect here is that data also needs to be considered for reusability across different domains. For example, in our case, it is very important to collect data in the access network and make it usable already to perform certain operations within that domain; but at the same time, the outputs of those operations — the aggregated data — need to also be usable by other domains, like the core or service operations. So it's not just the assumption that data will be consumed by one single entity; there will be different steps along the pipeline, and each one of them might make use of the data. Therefore it's important to make sure that we transform that data in different steps, in different processes in the pipeline, to make it usable for the end purposes. So I think it's important not to narrow data lakes down to just collecting data and exposing the data, but to take the opportunity to make it usable by transforming it, by processing it in some way.
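As a sketch of this transform-in-the-lake pipeline, the PySpark snippet below moves illustrative access-network counters through a raw, a cleaned and an aggregated layer that other domains could reuse; the paths, column names and medallion-style layering are assumptions, not DOCOMO's actual pipeline.

```python
# A hedged sketch of a multi-step lake pipeline: raw access-network records
# are cleaned in one step, then aggregated into a form other domains (core,
# service operations) can reuse. Paths and column names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lake-transform").getOrCreate()

# Step 1: land raw, heterogeneous records as-is ("bronze").
raw = spark.read.json("s3a://lake/bronze/ran_counters/")

# Step 2: normalize into the agreed structure ("silver") -- usable in-domain.
silver = (
    raw.withColumn("ts", F.to_timestamp("event_time"))
       .withColumnRenamed("cellId", "cell_id")
       .dropna(subset=["cell_id", "ts"])
)
silver.write.mode("overwrite").parquet("s3a://lake/silver/ran_counters/")

# Step 3: aggregate for reuse by other domains ("gold").
gold = (
    silver.groupBy("cell_id", F.window("ts", "15 minutes"))
          .agg(F.avg("prb_utilization").alias("avg_prb_utilization"))
)
gold.write.mode("overwrite").parquet("s3a://lake/gold/cell_load_15m/")
```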

Guy Daniels, TelecomTV (27:49):
Absolutely, Joan — great points, well made. Thank you very much. And we'll come across to Vivek as well. Vivek, additional points from you?

Vivek Chadha, Rakuten Symphony MEA (27:57):
Yes, I really like what Joan just said about not treating data lakes as a dumping ground for data, to use the metaphor. We're at a point where there is actually a very interesting intersection of on-prem or private cloud environments and the construct of data lakes or data marts. In Rakuten Symphony, for example, we've built a data and AI on-prem cloud environment, a platform which has exactly the kind of adapters that were just being talked about. But the approach that we are taking is that if we endlessly chase the perfect ontology of data across different departments, different domains of the telco — each domain itself having various versions of databases and different teams, et cetera — we will continuously be on this curve, because, Guy, you started the conversation by saying we've been chasing this for many, many years, and I think this is a problem that will outlive a lot of us, to be honest, because data has many sources and uses and intersections and evolutions.

(29:10):
So unless you try to restrict, or almost prescribe, a very strict regimen for data from cradle to grave, it's going to be almost impossible to dictate the formats of data over a lengthy period of time. Sure, you can get it standardized or consolidated for a period of time — two, three years, et cetera — but on a 10-to-20-year horizon, even the nature of the data will change. So the approach that we've started taking in Rakuten Symphony is that, rather than chase the perfection of data, let's chase the intent of the internal stakeholders — the technology stakeholders who want to use this data, the business stakeholders. The data and AI cloud platform, for example, has data governance and data adapters built in, which try to do something very similar to what was just said: not just bring data into the data lake, but bring it in a format that is very, very quickly usable by upstream systems, et cetera, to do the computation, to do the machine learning, and so on. And I think maybe there is value in pursuing that approach in conjunction with standardizing data and data meshes, et cetera, rather than a completely binary approach of let's try and ensure data is always pure and consistent and standardized, because that's going to be very hard to enforce even within a large organization, let alone across businesses.

Guy Daniels, TelecomTV (30:29):
Yeah. Vivek, thanks very much for this. This is a really great conversation; there are a lot of insights and ideas coming out of it. Diego, if I can come back to you — we've already touched on data use between domains, and I'd just like to ask you: how do we ensure that data from different telco business units can get pulled together without turning into either a data swamp or a data dump, as we've just heard from Vivek?

Diego Lopez, Telefónica & ETSI (30:55):
It's precisely about avoiding that this diversity pollutes the clean waters of the lake, basically. Seriously, the solution that we are focusing on, mainly, is about semantics. This is, I would say, the other side of our data efforts, apart from considering all these aspects related to data governance and being sure of who can access the data, et cetera. And something that I forgot to say before and will remark on here: it's not only about who accesses the data, but also about remaining sure of who has produced the data — we use the term 'provenance' to refer to this. In the interest of avoiding this diversity becoming a source of problems and a lack of usability for the amounts of data we have, we are exploring semantic models, and we have some initial results that are extremely promising. The point is that you can have a strong agreement on the format — how you structure the data, how you use the data.

(32:12):
But the problem is that, in many cases, when you are talking about the format, when you are calling something magnitude X of data field Y, you may be naming it in the same way in two departments or two groups, but they are pointing to something that is not exactly the same, or even contradictory in some cases. What we're trying to build are mechanisms so you can share what you mean when you're talking about something and connect it with what is meant by others when they're talking about that something, looking for equivalences at different degrees of abstraction. Think about the hierarchy of categories: you're talking about a service, service components, devices, and elements within the devices, et cetera. As a proof that this is something that is doable, we are applying it right now to the digital twins we are using for experimenting with changes or situations in our networks.

(33:29):
What we have done is build a consistent ontology, a semantic representation of what a network is, so people can translate what they have in operation. We can take that representation and transform it into something that is simplified — because it's running in a virtual environment — but reflects exactly what they're intending to do. The results we have so far are quite promising: we have reduced the time for building these twins and making the assessments we have been required to do, and we are working in that direction. These days, within TC DATA in ETSI, we are trying to define, or extend, a methodology for this kind of semantic classification that has been successfully applied in basically industrial and IoT environments, and the idea is to extend this methodology to the general case, and in particular to the case of telco infrastructure.
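As an illustrative sketch of recording such semantic equivalences (not Telefónica's actual models), the rdflib snippet below asserts that two departments' differently named fields mean the same thing, so consumers can discover the mapping instead of hard-coding it; the namespaces and term names are invented.

```python
# An illustrative sketch of the semantic-equivalence idea: two departments
# name "the same" field differently, and a shared ontology records that the
# terms are equivalent. rdflib and OWL's equivalentProperty are real; the
# vocabularies below are invented for the example.
from rdflib import Graph, Namespace, OWL, RDF

OPS = Namespace("http://example.org/ops#")    # assumed "operations" vocabulary
PLAN = Namespace("http://example.org/plan#")  # assumed "planning" vocabulary

g = Graph()
# Each team defines its own property...
g.add((OPS.throughputMbps, RDF.type, RDF.Property))
g.add((PLAN.linkSpeed, RDF.type, RDF.Property))
# ...and the shared ontology asserts they mean the same thing.
g.add((OPS.throughputMbps, OWL.equivalentProperty, PLAN.linkSpeed))

# A consumer can now discover the mapping instead of hard-coding it.
for subject, _, obj in g.triples((None, OWL.equivalentProperty, None)):
    print(f"{subject} is equivalent to {obj}")
```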

Guy Daniels, TelecomTV (34:47):
Very interesting, Diego — we'll look forward to updates on that later. Mark, let me bring you in for some additional thoughts here.

Mark Gibson, ConnectiviTree (34:57):
Yeah. In responding to this, I'm reminded of a conversation I had when I was managing a network inventory product. We modeled the devices in racks, and we could model the electric cables coming into them. And I said to our presales team, well look, why don't we think about extending our modeling to include power and to include heating? We could easily extend our model to do that. And the presales guy said, well, we could, but we couldn't sell it, because we don't have the domain expertise. We understand what network equipment looks like and what interconnect looks like, but we can't talk about power or heat in any real meaningful way. So if I think then about, well, okay, how do we build data lakes that aren't just unstructured swamps? Here are two super-adjacent domains at the heart of any kind of network operator's system, and either one of them could be giving us an indication that there's a problem, and we could be getting alarms and alerts from both domains in parallel, from their respective domain owners.

(36:06):
But how do we cross-correlate them? And equally, how do we do that in a way that the upstream data scientist — who is an expert neither in telecoms nor in power or heating — can then build a machine learning process to look across all of these data sources, and other adjacent data sources, and start to see deeper patterns in outright outages, in failures, or in other things which are kind of environmental? When that data is pushed into your common data set from each of those domains, it has to be scored, or in some other way normalized, as it enters, so that we get equivalence — so that I don't have to be an expert in heating, or understand when heating has got too high, to know that these two are okay and equivalent situations. And I can have two kinds of ambers up here and maybe two reds up there, and be able to link across them, and then make those additional insights that we want to make with our AI processing and machine learning.

(37:20):
So if you don't get this right at the point data is shared from a specific domain into the more general case, it's really hard to bake it in further up, because you then have to be an expert upstream in a particular downstream domain to make that inference, to make that decision. And I think this is one of the things that I've seen very recently in some conversations: that ability, within an individual domain, to empower the team to impose their smarts, their knowledge, onto the data as it exits their domain and is shared with other people, is really important. And that's one of the things that you start to get with data products, and then with overlying data products: you can not only push this data out in real time with meaningful information on it, but you can then start to cross-correlate in a common data stream, and start applying rules in real time in those streams, to feed learned behavior back into the stream as the stream is progressing. And I think that's one of the really exciting things that data mesh and data products start to bring to the fore: normalizing and empowering those downstream data teams to be able to add that insight in semi real time.
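A minimal sketch of this normalization at the domain boundary: each domain team encodes its own thresholds once and exports events on a shared severity scale, so an upstream correlator needs no domain expertise. The thresholds and field names here are illustrative assumptions.

```python
# An illustrative sketch of scoring data as it leaves a domain: power/cooling
# and network teams each map their own raw readings onto one shared severity
# scale, making events directly comparable upstream. Thresholds are invented.
from enum import IntEnum

class Severity(IntEnum):
    GREEN = 0
    AMBER = 1
    RED = 2

def score_cooling(inlet_temp_c: float) -> Severity:
    """Encodes the *cooling team's* expertise once, at the domain edge."""
    if inlet_temp_c >= 35.0:
        return Severity.RED
    return Severity.AMBER if inlet_temp_c >= 27.0 else Severity.GREEN

def score_network(packet_loss_pct: float) -> Severity:
    """The network team's own thresholds, exported on the same scale."""
    if packet_loss_pct >= 5.0:
        return Severity.RED
    return Severity.AMBER if packet_loss_pct >= 0.5 else Severity.GREEN

# An upstream correlator needs no domain knowledge: two AMBERs on the same
# rack are now directly comparable, linkable events.
events = [("rack-12", score_cooling(29.4)), ("rack-12", score_network(1.1))]
print([(rack, severity.name) for rack, severity in events])
```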

Guy Daniels, TelecomTV (38:53):
Thanks very much, Mark — lots of tricky issues to sort out there. Well, we've got a couple of minutes for one final question, and I'd like to talk about culture and people, because culture was one of the four Cs that Vivek mentioned at the start of this discussion. What organizational models successfully bridge data center facilities teams and cloud-native software groups? Joan, can we come across to you to start us off, please?

Dr. Joan Triay, DOCOMO Euro-Labs & ETSI (39:21):
Yeah, very good question. People are also very important in our organization; it's not just about technology. Actually, maybe I could make reference to a success story, at least in our company. What has worked out very well is understanding that the process of softwarization in the network is not narrowed down to any single specific domain of the network. We have, for example, successfully virtualized the whole core network, and we are starting with the virtualization of the RAN. It's very important, to actually get to the scale of softwarizing the whole network, to build teams and departments that take full responsibility for preparing the platform for any kind of network function. That is what, for example, we have done at DOCOMO. We reorganized the groups to make sure that there is a common, unified infrastructure for all the network domains, with the same methodologies being used across the different domains, avoiding silos of infrastructure and silos of management stacks. That is achievable because we have a common team that is preparing this platform for, as mentioned, all domains of the network.

(41:09):
And in the process of this transformation that is fostered by the team, it's very important to highlight also the importance of the engineers in the teams being trained and getting the necessary skills in technologies that facilitate for them the bridging of how to deploy software with the preparation of the infrastructure. And in that respect, one of the success stories is using tooling that is already very much used in the domain of the software functions to also manage the infrastructure. That implies that there is cross-pollination of know-how and technology skills among different people. Those people who in the past were focused only on pulling and plugging cables can nowadays understand, by using other tools, how to make the configuration of the infrastructure that is being prepared more efficient.

(42:32):
And that is thanks to actually skilling the people on the teams and making sure that there is a common understanding and framework being applied all across the company, by having a common department and a common team that takes care of the preparation of the end-to-end platform across all domains. And yeah, I think it is very important that the organization rethinks how to group and reorganize its teams to make them more efficient; and given that we are softwarizing the network end to end, it makes a lot of sense that there are dedicated teams that take end-to-end responsibility for the infrastructure as well.

Guy Daniels, TelecomTV (43:25):
Thank you very much indeed. And Dario, we'll come across to you.

Dario Sabella, ETSI MEC & xFlow Research (43:30):
Yeah, maybe just a comment; I agree with the previous speakers on this. This is a typical question where we need to understand how to improve the effectiveness of operational work among different teams, which are by nature looking at different levels of the business: for example, the data center physical facility people are the guys doing the hard work, and they may be quite different from another kind of club, the software developers, the guys who develop more. So this is a really typical question in organizations: how to make them work together. It's a transversal problem in organizations. In my experience, first of all, seeing the big picture helps everybody to understand: even if this is not my work, I understand what the others are doing in the organization. But not just that — also experiencing it in first person, like a sort of job rotation.

(44:33):
Some companies are experimenting with this kind of possibility, offered inside the company, to experience directly what's happening: maybe you are coming from a software background and you would like to see what actually happens in the data center facilities, and vice versa. It's a kind of job rotation program. But for me, the most powerful tool is when you put in place some mechanism for shared accountability. For the success of the delivery itself, end to end, there should be a shared responsibility across the groups, and this is actually the key to success, to collaboration. There are tools for doing that too — not just money, or, let's say, bonuses for cross-team collaboration. The organization at the top level, the board, should also stimulate and encourage this kind of collaboration practically, by means of shared accountability. I believe this is also very important for operations.

Guy Daniels, TelecomTV (45:44):
Dario, thank you very much indeed. That's a good place to end our conversation, because we have to leave it there, but I'm sure we will continue this debate during our live Q&A show later. For now, though, thank you all for taking part in our discussion. If you are watching this on day two of our NextGen Digital Infra Summit, please do send us your questions, and we'll try to answer as many as we can in our live Q&A show, which starts at 4:00 PM UK time. The full schedule of programs and speakers can be found on the TelecomTV website, which is where you'll also find the Q&A form for your questions, and of course the poll question that we ask. For now, though, thank you for watching, and goodbye.

Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.

Panel Discussion

After a period of ‘all-in’ public-cloud enthusiasm, many telcos are pulling certain workloads back on-prem or into operator-hosted private clouds. Surveys show a majority of companies are implementing private clouds, citing cost savings and tighter control of sensitive data. Modern private clouds now look and feel like public clouds: containerised, API-first, pay-as-you-grow, but without the egress fees. For network operators this means lower network-function operating expenditure and a chance to offer sovereign cloud capacity to enterprise customers. This panel discusses architectural blueprints, data-lake patterns and the operational talent mix required to run them.

Featuring:

  • Dario Sabella, Chairman ETSI MEC, VP, xFlow Research
  • Diego R. Lopez, Senior Technology Expert, Telefónica and ETSI Fellow
  • Dr. Joan Triay, Manager and Network Architect, DOCOMO Communications Lab. Europe (DOCOMO Euro-Labs), Rapporteur, ETSI ISG NFV
  • Mark Gibson, VP, Software, ConnectiviTree
  • Vivek Chadha, SVP Global Sales Head, Cloud & GM, Rakuten Symphony MEA

Recorded October 2025
