- Bell Canada announced its AI Fabric strategy in May
- It is investing in a national network of AI infrastructure facilities
- Buzz HPC, another Canadian company, is providing the Nvidia-based tech stack that will underpin Bell’s AI services plans
Bell Canada is quickly racking up a roll call of domestic partners for its AI plans, a move that will give extra credibility to its forthcoming sovereign services: Having already hooked up with Canadian large language model (LLM) developer Cohere, Bell has now turned to Buzz High Performance Computing (HPC), a subsidiary of Vancouver-based Hive Digital Technologies, for the Nvidia-based tech stack that will underpin its Bell AI Fabric strategy.
In late May, Bell Canada unveiled its Bell AI Fabric strategy that will see the operator invest in a national network of AI infrastructure facilities “starting with a datacentre supercluster in British Columbia that will aim to provide upwards of 500 MW of hydro-electric-powered AI compute capacity across six facilities”.
At the time, Mirko Bibic, Bell Canada’s president and CEO, stated: “Bell’s AI Fabric will ensure that Canadian businesses, researchers and public institutions can access high-performance, sovereign and environmentally responsible AI computing services. Through this investment, Bell is immediately bolstering Canada’s sovereign AI compute capacity, while laying the groundwork to continue growing our AI economy. This is transformational for our customers, for Canada and for Bell.”
Then in late July, the telco announced it was teaming up with LLM developer Cohere to “provide full-stack sovereign AI solutions for government and enterprise customers across Canada, and to deploy proprietary, secure AI solutions within Bell.”
Now Bell has announced that Buzz is to “deliver one of Canada’s largest sovereign AI ecosystems through Bell AI Fabric”. The company, which builds custom high-performance computing systems for its customers, will provide Bell’s enterprise and government customers with access to a range of Nvidia GPU (graphics processing unit) clusters, including some based on Nvidia’s latest Blackwell products, interconnected using Nvidia’s Quantum-2 InfiniBand networking technology.
Bell noted in this announcement: “Buzz HPC’s large-scale Nvidia accelerated computing infrastructure, purpose-built for AI, machine learning and scientific computing, will be integrated with Bell AI Fabric’s advanced fibre network, datacentres and partner ecosystem, including Cohere. This combined capability supports a range of use cases, including developing AI foundational models and fine-tuning existing models all within Canada.”
By hosting the AI infrastructure across multiple provinces, Bell will be able to offer its customers access to Nvidia clusters hosted in local facilities that comply with strict data residency and cybersecurity regulations, the telco noted.
The first facility resulting from the Bell/Buzz partnership, a 5 megawatt (MW) deployment in Manitoba, will come online “later this year” and will be followed by “expansion into other Bell AI Fabric datacentres”, noted the telco.
The announcement once again highlights the importance of Nvidia’s technology to telco AI infrastructure/AI factory plans: Nvidia technology is already at the heart of more than 20 telco AI factories around the world – see Asia is a hotbed of telco AI factories.
John Watson, group president of business markets, AI and Ateko at Bell Canada, stated: “Buzz HPC is one of the few Canadian cloud service providers with a purpose-built AI cloud that has experience operating GPU clusters at scale. We are excited to partner with Buzz HPC for its AI infrastructure solutions – an important layer in the Bell AI Fabric ecosystem delivering the advanced workloads our customers need in a sovereign, private and secure Canadian facility.”
Bell noted that the partnership with Buzz gives the telco “a comprehensive AI solution for the Bell AI Fabric ecosystem. Buzz HPC provides the foundational hardware layer for large-scale AI workloads; Cohere delivers customised large language models; Ateko [Bell’s tech services unit] brings specialised professional services; and Bell offers Canada’s most advanced network and datacentre backbone all working together to accelerate Canada’s leadership in artificial intelligence.”
It’s noticeable, though, that there is no mention of Groq, the US vendor that develops and builds hardware specifically designed to accelerate LLM inference, including its language processing unit (LPU), an application-specific integrated circuit (ASIC) designed specifically for AI workloads. When Bell first announced its AI Fabric, it stated that its first AI Fabric facilities were due to come online in June 2025 in partnership with Groq at the US firm’s 7 MW AI inference facility in Kamloops, British Columbia, “powered by Groq’s cutting-edge LPUs”.
It added: “Bell has selected Groq as its inference infrastructure partner to support the development of sovereign AI in Canada, ensuring that customers have access to the most up-to-date technology to power their AI workloads. Groq’s advanced LPUs deliver faster inference performance than other processing units at significantly lower costs per token than existing market alternatives.”
TelecomTV has reached out to Bell Canada to confirm that the Groq relationship is still part of its AI Fabric strategy.
UPDATE: Bell Canada responded with the following message: “Groq remains a key AI inference partner, and the Kamloops facility came online in June as planned. While the recent announcement focused on Buzz’s Nvidia-powered infrastructure for training and scientific workloads, Groq continues to play a vital role in Bell’s AI Fabric for high-speed, low-latency inference.”
- Ray Le Maistre, Editorial Director, TelecomTV