What do telcos want from AI?

  • Clarity, sanity, real-world use cases and careful regulation
  • That’s a demanding list – can Santa deliver?
  • Probably not, but it was worth asking
  • The arrival of generative AI has revamped expectations
  • See what our experts had to say during the recent AI-Native Telco Summit

As network complexity has increased, so too has the telco need to tap greater network intelligence. As a result, telcos have long seen artificial intelligence (AI), and machine learning in particular, as an essential work in progress to help them more effectively tackle the configuration, orchestration, maintenance, energy efficiency and general operational challenges they face across their networks.

But it’s fair to say that the telco journey, from basic machine learning to something approaching full-blown AI, has so far been unspectacular and built around a relatively myopic worldview. It tends to target specific tasks and relies on predefined rules and patterns gleaned from data sourced largely from the telco’s own networks and support systems – after all, if there’s one thing telcos have in abundance, it’s network data.

However, it could be argued that this approach might have become a hindrance for an industry attempting to think its way out of its low profitability rut. Perhaps telco AI needs to look beyond its traditional boundaries?

That’s where generative AI (GenAI) comes in. It aims to cast the all-important training data net wider: in this case, beyond the telco itself, beyond the rest of its ecosystem and peers, and into the world at large. Out there might be found valuable data and insights that could be stitched together with a telco’s own data to provide pointers on business strategy, tactics and political manoeuvring, helping aspiring digital service providers (DSPs) better make their way in the world.

Perhaps an AI breakthrough that offers something greater than the simple promise of slightly better performance from the existing industry framework is exactly what the telecom sector has been waiting for?

So will ChatGPT, built on a large language model (LLM), and its GenAI ilk prove to be the game changer that many would like?

Is GenAI the saviour or just a naughty boy?

Launched to the public in late 2022, ChatGPT hit the headlines as a cloud-based online application offering artificial intelligence at an apparently low cost and with the alleged ability to tackle dozens of applications given the right training.

According to its proponents, amongst its many attributes was an ability to create original content, such as images, videos and text, that appeared indistinguishable from content created by humans and was, therefore, useful for applications such as entertainment, advertising, and creative arts, along with fraud and deception of various kinds (more on that later).

Its launch was greeted with both awe and scorn by many of the first people to try it out. Awe, because it appeared to be a big step up from, say, Google Search, by not only gathering information against keywords, but by performing large language analysis on it. Scorn, because while its ability appeared magical at first, the app quickly proved itself prone to hallucination and produced howling errors – not little ones (like getting a few numbers wrong) but huge and obviously wrong assertions and conclusions.

Despite the scorn, ChatGPT’s viral impact indicated to many that it had real power – and probably growth and profit potential as well. Consequently, all the tech behemoths are now either pushing on with their GenAI models or researching feverishly, confident that further training and refinement of the models will overcome the difficulties.

But not everyone has enthusiastically joined the GenAI party. Some thought-leaders are cloistered in the kitchen where they’re sharing a range of concerns over its impact.

Quite apart from the hallucination and reliability problem, GenAI’s ability to enable mischief looks enormous – from cheating and plagiarism by students to commercial surveillance and customer manipulation by corporations and governments.

Deep faking was already a problem, but with GenAI apparently poised to make it widely and inexpensively available, it may well become endemic. But what really worries some observers is how GenAI might enable the already powerful to strengthen their grip. To be clear, the existential fear is not so much that the machines might “take over because they’re smarter than us,” as Elon Musk put it (finally proving that he isn’t!). Instead, it seems more likely that those who control them might amass enough power and media control to become unstoppable.

According to IT industry observer and critic Cory Doctorow, the intelligent platforms powering technology powerhouses such as Google, Facebook and Amazon are already enabling what he calls “enshittification”. The way Doctorow and others see it, corporations have discovered that they can curate their already powerful and flexible online platforms to limit consumer choice, blunt competition, form near-monopolies and ratchet up prices in the process. So, with platform managers and programmers focused on “the metrics” as their primary measure of success, it’s been easy for ‘enshittification’ to gather pace.

One relatively minor example (there are many) is to block or slow customer churn. Here, a software change – flagged as a minor “upgrade” to a customer departure procedure – can inhibit thousands of customers from leaving, producing dollars for the bottom line. To be fair, it’s not that the technology, or even its owners and managers, is necessarily dangerous or evil. It’s that the temptation to apply ‘enshittification’ in tiny, cunning morsels proves difficult to resist or even recognise – a twiddle here, a tweak there, et voilà, the user/customer finds themselves in a digital maze with no obvious exit signs to hand.

That’s just one relatively crude ‘enshittification’ move, often and obviously practised, but the worry is that, armed with generative AI, platform managers can become even better at cunning knob-twiddling and lever-pulling to further their businesses and increase their market power – and prominent observers have noticed.

Most recently, general angst about unbridled surveillance, together with worries about AI accountability and a lack of regulation, has seen prominent tech worthies line up with community leaders and politicians to call for a ‘pause’ on GenAI development – a concern that’s even rippling out to infect sober entities such as insurance company AXA SA, which is calling for a GenAI pause following a survey of risk specialists.

Meanwhile, the excitable Elon Musk, attending the UK government’s AI Safety Summit, warned that AI posed “one of the biggest threats” to humanity. (Strange, coming from a man who had just invested in GenAI, but then, Musk has never seen a spotlight that he didn’t want to bathe in).

To counter the AI naysaying, however, there has also been plenty of positive sentiment.

One set of numbers suggested that GenAI investment had already boosted enterprise spending on cloud infrastructure by 18% year on year; while in Asia Pacific, IDC has reported that generative AI is gaining popularity in IT operations and service management, with 43% of the organisations it surveyed currently exploring potential GenAI use cases and 55% of financial organisations and telecom firms investing in GenAI technology in 2023.

In another example, our recent DSP Leaders report, Telecom’s Take on AI, also indicated that the telecom sector had already taken AI to its collective bosom, with a majority of respondents claiming that their organisations had an AI strategy and a majority of those claiming that their strategies leaned towards GenAI. Not only that, but more than half believed that GenAI was set to provide a boost to their sales within the next 12 months, with only 24% expecting to see no impact over that period.

While most respondents thought GenAI powerful, 76% also believed it requires regulation and only 11% thought it shouldn’t be regulated at all.

Expert opinions

So how positively do the industry experts who participated in the recent AI-Native Telco Summit think the industry should regard AI?

Certainly, setting aside GenAI for the moment, our panels and live Q&A sessions revealed much agreement on the fundamentals of AI. The consensus was that AI is an excellent tool for exploiting network data to help refine the network’s operation, in addition to things like cost management, fault-finding and so on.

Because AI can suffer greatly from GIGO (garbage in, garbage out), great importance was placed on data gathering to prepare the ground – standardising and storing it has become key. Once it’s standardised, it can be shared with other telcos to provide comparisons and help with predicting faults and problems before they occur. “Service assurance is a great [AI] use case,” offered Mark Gilmour, CTO of ConnectiviTree. “Also, anomaly correction and anomaly correlation are ‘low-hanging fruit’ with a clear-cut use case,” he added. Other examples of telco AI use cases were root cause analysis and helping developers to build better software.
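To illustrate why anomaly detection counts as ‘low-hanging fruit’ once data has been standardised, here is a minimal, hypothetical sketch in Python: a rolling z-score over a single network KPI series flags obvious deviations. The series, window and threshold are invented for illustration and not drawn from any operator’s toolset.

```python
import numpy as np

def flag_anomalies(kpi_values, window=24, threshold=3.0):
    """Flag points whose rolling z-score exceeds the threshold.

    kpi_values: a sequence of a single network KPI (e.g. hourly latency in ms).
    window: how many preceding samples form the baseline.
    threshold: number of standard deviations treated as anomalous.
    """
    values = np.asarray(kpi_values, dtype=float)
    anomalies = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mean, std = baseline.mean(), baseline.std()
        if std == 0:
            continue  # flat baseline: no meaningful z-score
        z = abs(values[i] - mean) / std
        if z > threshold:
            anomalies.append((i, values[i], round(z, 2)))
    return anomalies

# A made-up latency series with one obvious spike at the end
series = [10.0] * 30 + [10.5, 9.8, 42.0]
print(flag_anomalies(series))  # flags the final sample
```

In practice, of course, operators use far more sophisticated models, but the point stands: once the data is clean and consistent, even simple statistics surface useful signals.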

Beth Cohen, head of SDN product strategies at Verizon, thought natural language processing was an important area for telcos and something she said Verizon had been using “for around 10 or 15 years” for sentiment analysis, where customers’ phrasing was analysed to indicate the degree of anger displayed during a customer call. This shows how far telco AI could fan out beyond its network technology to pull in behavioural data in order to better understand customers.
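As a rough illustration of the kind of sentiment analysis Cohen describes (and emphatically not Verizon’s own system), the sketch below scores call-transcript snippets with an off-the-shelf model from the Hugging Face transformers library and flags confidently negative ones for escalation. The snippets and threshold are hypothetical.

```python
# Requires the Hugging Face `transformers` package (and a model download on first run)
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # default off-the-shelf sentiment model

# Hypothetical call-transcript snippets
call_snippets = [
    "I have been waiting three weeks for an engineer and nobody calls me back.",
    "Thanks, the new router arrived and everything works fine now.",
]

for snippet in call_snippets:
    result = classifier(snippet)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    # Treat a confidently negative label as a proxy for customer frustration
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        print(f"Escalate: {snippet!r} (score {result['score']:.2f})")
```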

Unifying operations and administration data models for AI is a problem. Jason Hogg, the general manager of Azure for Operators at Microsoft, observed that the complexity of many network and telemetry functions limits the ability of large numbers of people to access the data. “The big idea five or six years ago [to overcome this constraint] was the data lake. This was where you dumped all your data into one location and the [AI] magic would happen – it didn’t,” he said.

“Now what we’re looking at is a new way of architecting the datasets to make them available in much the same way as compute microservices are. Now with the datasets you can actually have data packages that are life-cycle managed independently,” he said.
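A minimal sketch of the pattern Hogg describes, with datasets packaged, owned and life-cycle managed like microservices rather than dumped into a data lake, might look something like the Python below. The field names and the example ‘data product’ are purely illustrative assumptions, not Microsoft’s actual implementation.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DataProduct:
    """A dataset packaged like a microservice: owned, versioned and
    life-cycle managed independently of its consumers."""
    name: str
    owner: str
    version: str                       # semantic version of the data contract
    schema: dict                       # column name -> type, as a simple stand-in
    retention_days: int                # life-cycle policy
    deprecated_after: Optional[date] = None

    def is_active(self, today: Optional[date] = None) -> bool:
        today = today or date.today()
        return self.deprecated_after is None or today <= self.deprecated_after

# A consumer discovers the package through a catalogue rather than a data-lake dump
ran_telemetry_v2 = DataProduct(
    name="ran-cell-telemetry",
    owner="ran-operations",
    version="2.1.0",
    schema={"cell_id": "str", "timestamp": "datetime", "prb_utilisation": "float"},
    retention_days=90,
)
print(ran_telemetry_v2.is_active())  # True: safe to build an AI pipeline against it
```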

“We’re getting better at realising we need to plough through large amounts of data,” agreed Verizon’s Cohen. The real solution was brute force, she said. “You just needed to throw lots of processing power at the data.”

AI and network automation

The summit dwelt on so-called AIOps – the application of AI to facilitate end-to-end network automation and zero-touch operations. According to ConnectiviTree’s Gilmour, this approach isn’t necessary for absolutely everything but it does show a lot of value for network performance improvement. He said using data models to foresee degradation in both optical and mobile networks is a valuable use case, as is “analysing against performance metrics” to implement things like deterministic routing. He added that it’s important to remember that you don’t need to “boil the ocean” but can limit the datasets used and the processing required to reach useful guidance with a narrower approach, using local and internal data to achieve the desired results.

But that focus could easily change if GenAI proves its value for telcos by successfully folding in unstructured language. “We’re looking across the big areas of spend to see where AI could be applied,” said Danielle Rios Royston, acting CEO of cloud-based BSS vendor Totogi.

“We have loads of [call centre] customers calling in with issues and a vast amount of data encapsulating years of call centre history with answers written by humans, thus providing a learning model that you could throw against a large language model,” she noted, adding that many customer queries in the call centre software business could be quickly resolved using an LLM coupled with interactive voice.
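To make the idea concrete, here is a hedged sketch of how historical call-centre answers could feed an LLM: retrieve the most similar past ticket (here with scikit-learn’s TF-IDF similarity) and fold it into a prompt for whichever model the operator uses. The tickets, query and prompt wording are invented for illustration and are not Totogi’s product.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical history of (customer query, human-written resolution) pairs
history = [
    ("My SIM shows no service after porting my number",
     "Ask the customer to restart the handset; porting completes within two hours."),
    ("I was billed twice for the same month",
     "Raise a duplicate-charge credit and confirm the refund date with the customer."),
]

new_query = "I was billed twice this month for my broadband"

# Retrieve the most similar past ticket using simple TF-IDF similarity
vectoriser = TfidfVectorizer()
doc_matrix = vectoriser.fit_transform([q for q, _ in history] + [new_query])
scores = cosine_similarity(doc_matrix[len(history)], doc_matrix[:len(history)])[0]
best_q, best_a = history[scores.argmax()]

# Fold the retrieved ticket into a prompt for whichever LLM the operator uses
prompt = (
    "You are a call-centre assistant.\n"
    f"A similar past ticket: {best_q}\n"
    f"Resolution used then: {best_a}\n"
    f"New customer query: {new_query}\n"
    "Suggest a resolution."
)
print(prompt)
```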

The likely value of a large language model was an important point of contention between those who advocated strongly for “going big” with GenAI and others who worried about overkill, given the extra cost and effort required.

“There may be a distinction to be drawn between the ‘localised’ AI models that can be used in a localised environment, versus the large-scale generative AI that Danielle has also talked about and is probably more suited to the public cloud,” said Gilmour, referring to Royston’s trademark evangelism for the public cloud for (nearly) all things.

“Not many of our operational functions are that big. I don’t need a vast language learning model that is picking up on everything that’s going on across the world, if I’m purely focused on routing across my network,” said Gilmour.

To GenAI or not to GenAI?

That’s not even the question, it seems. The questions are: “When?”, “In what form?” and “Who will deliver it?”

According to industry analyst Richard Windsor, author of the daily Radio Free Mobile blog, today’s GenAI leaders may already have been overtaken by a new wave of investment, which has lately seen a flood of products and approaches, including a chatbot from (you guessed it) Elon Musk, trained on Twitter data (God help us!).

Windsor reported that a combination of new startups and open-source activity means the number of available AI services is increasing at a breakneck pace. He added that there are also variations on the generative theme – one that seems feasible is Microsoft’s ‘Autogen’, which ropes together multiple AI agents, each with a particular specialisation, and orchestrates them to chew on complex problems and spit out a result. Naturally, Microsoft claims that this approach often outperforms individual AI models.
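The orchestration idea can be sketched in a few lines of Python: several specialised ‘agents’ each contribute their piece and an orchestrator stitches the results together. This toy version uses canned functions rather than real LLM calls and is a generic illustration of the pattern, not Microsoft’s actual Autogen API.

```python
from typing import Callable, List

Agent = Callable[[str], str]  # each agent turns a task description into a contribution

def capacity_agent(task: str) -> str:
    # In a real system this would wrap an LLM or forecasting model call
    return f"[capacity] forecast the demand implied by: {task}"

def config_agent(task: str) -> str:
    return f"[config] draft candidate parameter changes for: {task}"

def review_agent(task: str) -> str:
    return f"[review] sanity-check the proposed changes for: {task}"

def orchestrate(task: str, agents: List[Agent]) -> str:
    """Run each specialist in turn and stitch their contributions together."""
    return "\n".join(agent(task) for agent in agents)

print(orchestrate(
    "cell congestion in the city centre during evening peak",
    [capacity_agent, config_agent, review_agent],
))
```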

So a catastrophic ending for GenAI is unlikely, despite all the existential angst. There are, however, bound to be major bumps on what still looks like being a long and winding road.

- Ian Scales, Managing Editor, TelecomTV
