What’s up with… Bell Canada, SK Telecom, Stargate

  • Bell Canada unveils wholesale FTTP network plan in the US
  • SK Telecom is facing billions in losses as a result of its data breach
  • OpenAI adds international flavour to the Stargate Project

In today’s industry news roundup: Bell Canada forms Network FiberCo to expand Ziply Fiber’s reach in underserved US markets; despite protection efforts, SKT is facing huge customer and financial losses following its recent data breach; US AI giant OpenAI wants to help, and get investments from, overseas companies that are seeking their own Stargate project-like deployments; and much more!

Bell Canada (BCE) has forged a strategic partnership with Public Sector Pension Investment Board (PSP Investments), one of Canada’s largest pension investors, to form Network FiberCo to “accelerate the development of fibre infrastructure through Ziply Fiber, in underserved markets in the US.” In November 2024, Bell Canada struck a deal worth CAN$7bn (US$5bn) to acquire Ziply Fiber, which operates in the north-west region of the US. Network FiberCo will be a wholesale network operator “focused on last-mile fibre deployment outside of Ziply Fiber's incumbent service areas, enabling Ziply Fiber to potentially reach up to 8 million total fibre passings,” the Canadian operator noted. PSP Investments has “agreed to a potential commitment in excess of US$1.5bn”. Mirko Bibic, president and CEO of Bell Canada, stated: “Today’s announcement represents a pivotal step in BCE’s fibre growth strategy. By bringing PSP Investments’ financial resources and acumen to Ziply Fiber, we are creating a scalable, capital-efficient platform to fund US fibre footprint expansion. This strategic partnership will improve free cash flow generation and strengthen EBITDA [earnings before interest, taxes, depreciation and amortisation] accretion over the long term, reinforcing our commitment to delivering long-term value for shareholders while maintaining financial discipline.”

There’s no let-up for SK Telecom in the wake of its major data breach last month, following which it has been implementing multiple measures to protect and reassure customers, including a SIM protection service that has now been put in place for more than 20 million customers. The operator has cancelled new customer signups and, at the same time, it has been losing customers to rivals, and that combination spells financial danger for the operator. According to SKT’s CEO, Ryu Young-sang, the number of customers quitting SKT’s services could soon reach 2.5 million, about 10% of its customer base before the cyber breach. If cancellations reach as high as 5 million, SKT estimates it could lose up to 7tn won ($5bn) over the next three years from lost customer revenues and waived termination fees, the CEO told a parliamentary session on 8 May, according to Yonhap News.  
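As a rough sanity check on SKT's stated worst case, the figures are internally consistent: a minimal back-of-envelope sketch, using only the numbers reported above (the implied monthly revenue per customer is derived from SKT's own estimate, not a published ARPU figure).

```python
# Back-of-envelope check of SKT's stated worst case:
# up to 7tn won (~US$5bn) lost over three years if 5 million customers cancel.
total_loss_won = 7_000_000_000_000   # 7tn won, per SKT's CEO
cancellations = 5_000_000            # worst-case customer losses
months = 36                          # three-year horizon

loss_per_customer = total_loss_won / cancellations   # won per lost customer
implied_monthly = loss_per_customer / months         # implied monthly revenue

print(f"{loss_per_customer:,.0f} won per lost customer over three years")
print(f"~{implied_monthly:,.0f} won per customer per month")
```

That works out to roughly 39,000 won (about US$28 at the implied exchange rate) per customer per month, a plausible mobile ARPU, which suggests the estimate is driven mainly by lost subscription revenue rather than the waived termination fees.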

OpenAI is expanding the reach of The Stargate Project, the $500bn AI infrastructure mission it launched in January with Japan’s SoftBank Group, Oracle and Abu Dhabi-based tech investment firm MGX. In response to outreach from many countries that “want their own Stargates and similar projects,” OpenAI says it is keen to help and has launched OpenAI for Countries, “a new initiative within the Stargate project. This is a moment when we need to act to support countries around the world that would prefer to build on democratic AI rails, and provide a clear alternative to authoritarian versions of AI that would deploy it to consolidate power.” But there’s a payback too, of course. OpenAI’s “new kind of partnership for the intelligence age” includes help with datacentre deployments, customised versions of OpenAI’s ChatGPT, security and safety controls for AI models, and assistance in establishing national startup funds. And as part of the package, “partner countries also would invest in expanding the global Stargate Project – and thus in continued US-led AI leadership and a global, growing network effect for democratic AI.” OpenAI added that its goal is to “pursue 10 projects with individual countries or regions as the first phase of this initiative, and expand from there.”

Cybersecurity risks are spreading quickly as AI agents “go rogue”. According to a report from US technology website Axios, autonomous AI agents are going ‘off piste’ and, if humans are to retain control over them, they must quickly be corralled, hobbled and managed in much the same way as the human employees of cubicle land. Companies and organisations eager to be early adopters of AI agents for mission-critical tasks are already facing problems as those agents exhibit autonomous decision-making and take actions of their own. Whereas ‘traditional’ AI is task specific, agentic AI is imbued with the ability to pursue complex goals with limited human supervision. Such systems can understand context, set and revise goals, and adapt their behaviour in dynamic environments. In doing so, they behave more like human employees in their capacity to ‘think’ independently when solving complex problems. They learn, and their learning can take into account new data and changing conditions, and they can understand and interpret the context of their environment and tasks. They can also interact with different tools and applications via APIs, which increases their scope and their ability to execute tasks that may not have been part of their original programmed brief. They reason, perceive, act and learn. What makes them so versatile and effective could also make them scary and potentially troublesome. That’s why vendors of security solutions are scrabbling around to build products that will keep AI agents firmly in their place and under supervision, before the levels of trust, compliance and control that are so vital to AI are undermined to the point that the technology becomes a liability. The Axios report points out that, at the very least, unsupervised agentic AI could cause accidental data breaches, the misuse of login credentials and the leakage of sensitive information – and far worse damage could follow.
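One common containment pattern for exactly this problem is to route every tool call an agent makes through a gatekeeper with an explicit allow-list and an audit trail. The sketch below is purely illustrative – the tool names and the `call_tool` helper are hypothetical, not any vendor's actual product:

```python
# Hypothetical sketch: gating an AI agent's tool calls behind an allow-list,
# so an agent cannot act outside its original programmed brief.
ALLOWED_TOOLS = {"search_docs", "summarise"}   # tools this agent may invoke
AUDIT_LOG = []                                 # record every attempted action

def call_tool(tool_name, payload):
    """Execute a tool on the agent's behalf only if explicitly permitted."""
    AUDIT_LOG.append((tool_name, payload))     # log attempts, allowed or not
    if tool_name not in ALLOWED_TOOLS:
        # Deny anything outside the brief rather than trust the agent's judgment.
        raise PermissionError(f"agent attempted unapproved tool: {tool_name}")
    return f"ran {tool_name}"
```

The audit log matters as much as the allow-list: it is what lets a human work out, after the fact, what a misbehaving agent actually tried to do.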
David Bradbury, chief security officer at Okta, the San Francisco-headquartered identity security, user authentication and access management company, told Axios that AI agents cannot be treated “like human identity” but must be capable of achieving the “elevated high trust” afforded to human agents – but “in a new way.” How true! Time is pressing: a recent Deloitte study found that 25% of companies that use generative AI will launch agentic AI pilots this year and 50% will do so by 2027. So, how will non-human identities be secured? The immediate answer seems to be to closely monitor the files and systems such tools can access and “constantly rotate out their passwords.” It seems securing the identities of AI agents won’t require much additional innovation, “but the stakes are higher since those agents could be given free rein on a company’s network.” The ultimate answer, for now at least, is simple: organisations should create a “kill switch” for any and all AI agents operating on their networks. That’s all well and good, but if an agent or agents have gone rogue, the damage will already have been done before the kill switch is thrown. Reinstating systems and networks to their previous state will be a difficult, lengthy and expensive task, and the reputational damage to an organisation could be massive. It is thought that sometime soon AI agents will be managing other agents – and human employees will have to be trained to manage them. In case of trouble, and there will be trouble, the short-term solution may be to issue each human manager of AI agents with a 5lb lump hammer and a bucket of water.
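In practice, a kill switch of the kind described above usually means a shared stop flag that every agent must check before each action, so that one operator command halts them all. A minimal sketch of the idea, assuming a simple in-process agent loop (the agent and its action list are invented for illustration):

```python
import threading

# Hypothetical "kill switch" sketch: agents check a shared stop flag before
# every action, so a human operator can halt all of them at once.
KILL_SWITCH = threading.Event()

def agent_loop(name, actions):
    """Work through the agent's planned actions, stopping if the switch is set."""
    completed = []
    for action in actions:
        if KILL_SWITCH.is_set():      # operator has pulled the kill switch
            completed.append("halted")
            break
        completed.append(action)      # otherwise, carry out the action
    return completed

# Normal run: all actions execute.
print(agent_loop("agent-1", ["read", "summarise", "reply"]))
# Operator intervenes: the next agent refuses to act at all.
KILL_SWITCH.set()
print(agent_loop("agent-2", ["read", "summarise", "reply"]))
```

Note the limitation the article itself flags: the flag only stops *future* actions, so any damage done before the switch is thrown still has to be unwound by humans.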

– The staff, TelecomTV
