Nvidia flexes its AI muscles, targets 6G R&D

  • Nvidia is using its GTC 2024 gathering to hammer home its advantage in the AI technology sector
  • It made 40 separate announcements, including one unveiling its new GPU chip, on the first day of its event
  • The vendor is also upping the ante in the radio access network market with its 6G Research Cloud platform
  • And it has enhanced its relationships with the major hyperscalers

With the AI-infused MWC24 done and dusted, Nvidia has pulled back the curtain on its own in-person event, GTC (GPU Technology Conference) 2024, and is making the most of its market-leading position. It has announced a massive range of new products, relationships and initiatives, including its next move in the telecom sector, a new graphics processing unit (GPU) and enhanced relationships with the major cloud platforms that are currently the vendor’s biggest customers. 

On day one of the event, held in San Jose, California, and streamed online, Nvidia made 40 separate announcements, which is a lot to take in. The one that has attracted the most buzz is the unveiling of its next-generation AI GPU, the B200 Blackwell, which, the company claims, can manage some calculations and operations 30 times faster than its predecessor.

The move is clearly designed to solidify Nvidia’s position as the leading provider of the processors that enable AI training and inference. The company has an estimated 80% share of the AI GPU market, a leadership position that has driven massive sales growth and sent its market valuation beyond $2.2tn, making it the third-biggest US company after Microsoft and Apple. And its market share is likely to go up even further before competition from the likes of AMD, Intel and others starts to eat away at its position, as tech giants such as Amazon, Google, Microsoft and OpenAI have indicated that it makes more sense for them to buy Nvidia’s chips and develop deep partnerships with the company than to go to the expense of developing their own AI chips (more on such relationships later). 

The Blackwell is named after David Blackwell, the US statistician and mathematician whose work helped lay the mathematical foundations of dynamic programming, a technique that today is commonly used in the global finance industry and in many sciences. His work also greatly advanced game theory, probability theory and information theory. Nvidia’s new chip is the successor to the hugely influential and massively lucrative H100 Hopper series that has powered the advance of AI in recent years. Introducing the new product at GTC 2024, Nvidia’s founder and CEO, Jensen Huang, said the Blackwell is “now twice as fast as Hopper” but, very importantly, has “computation in the network” to make it run even faster. So fast that, Huang claimed, it will be able to do astonishing things, such as “turning speech into 3D video.” 
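For readers who have not encountered it, dynamic programming simply means solving a large problem by caching and reusing the answers to its overlapping subproblems. As a purely illustrative aside (a sketch of the general technique, not anything from Nvidia or the Blackwell literature), the classic rod-cutting problem in Python shows the idea:

```python
from functools import lru_cache

# Dynamic programming in miniature: the answer to a size-n problem is built
# from cached answers to smaller subproblems. In the classic rod-cutting
# example, prices[i] is the price fetched by a rod piece of length i + 1.
prices = [1, 5, 8, 9, 10, 17, 17, 20]

@lru_cache(maxsize=None)
def best_revenue(n: int) -> int:
    """Maximum revenue obtainable by cutting up a rod of length n."""
    if n == 0:
        return 0
    return max(prices[cut - 1] + best_revenue(n - cut)
               for cut in range(1, min(n, len(prices)) + 1))

print(best_revenue(8))  # 22 – each subproblem is solved once and memoised
```

Each subproblem is solved exactly once, which is what makes the technique tractable in the pricing, scheduling and control problems Blackwell’s field went on to tackle.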

These and other remarkable capabilities are possible because the Blackwell chip has 208 billion transistors, an almost unbelievable number and 128 billion more than the Hopper’s 80 billion. The mind actually does begin to boggle. Huang added that the new chip has five times the AI performance of the Hopper, whilst lowering energy consumption by a factor of 25. 

He stated: “There’s no memory locality issues, no cache issues, it’s just one giant chip” of immense power and utility, but it will be “quite expensive” to buy. 

Well, the H100 Hoppers cost more than $30,000 each and, although the price of the B200 Blackwell has yet to be announced, it is likely to be eye-wateringly expensive. Nonetheless, whatever the price, such is the pace of AI development that Nvidia’s customers will be queuing up to get their hands on some Blackwells regardless of their cost. The new chip should be on the market by this summer. 

Among those in the queue will be the hyperscale cloud giants, with which Nvidia further cemented its relationships during the opening hours of GTC 2024. It announced extended partnerships with Amazon Web Services, Microsoft (including, but not limited to, its Azure cloud operation), Google Cloud and Oracle (including its cloud operations). 

Nvidia also used GTC 2024 to hammer home its intentions for the telecom sector. The vendor has been forging relationships with radio access network (RAN) equipment vendors and network operators alike in recent months and, during MWC24, announced the formation of the AI-RAN Alliance, which is focused on a potential next-generation architecture for mobile access networks – see AI-RAN Alliance launches at #MWC24.

Now it has introduced the 6G Research Cloud platform, a new set of software tools through which AI can be applied to the RAN. It comprises three main elements:

  • The Aerial Omniverse Digital Twin for 6G, a “reference application and developer sample that enables physically accurate simulations of complete 6G systems, from a single tower to city scale”
  • The Aerial CUDA-Accelerated RAN, a “software-defined, full-RAN stack that offers significant flexibility for researchers to customise, program and test 6G networks in real time”
  • The Sionna Neural Radio Framework, which “provides seamless integration with popular frameworks like PyTorch and TensorFlow, leveraging Nvidia GPUs for generating and capturing data and training AI and machine learning models at scale” – a flavour of which is sketched below
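Sionna is already available as an open-source, TensorFlow-based Python library, which gives a sense of what developing against the framework looks like. The following is a minimal, illustrative link-level sketch, assuming Sionna’s 0.x API (the calls reflect the public library, not anything demonstrated at GTC): random bits are mapped to 16-QAM symbols, passed through an additive white Gaussian noise (AWGN) channel, demapped to soft bit estimates and checked for errors.

```python
# Minimal Sionna link-level sketch (assumes the open-source 0.x API).
# Random bits -> 16-QAM symbols -> AWGN channel -> LLRs -> raw bit error rate.
import tensorflow as tf
from sionna.utils import BinarySource, ebnodb2no
from sionna.mapping import Mapper, Demapper
from sionna.channel import AWGN

batch_size, num_bits = 64, 1024      # 1,024 bits per batch example
num_bits_per_symbol = 4              # 16-QAM

source = BinarySource()              # i.i.d. uniform random bits
mapper = Mapper("qam", num_bits_per_symbol)
demapper = Demapper("app", "qam", num_bits_per_symbol)
channel = AWGN()

bits = source([batch_size, num_bits])
no = ebnodb2no(10.0, num_bits_per_symbol, coderate=1.0)  # noise power at Eb/No = 10 dB
x = mapper(bits)                     # complex 16-QAM symbols
y = channel([x, no])                 # add white Gaussian noise
llr = demapper([y, no])              # per-bit log-likelihood ratios

# Hard decisions: Sionna's LLR convention is log Pr(b=1)/Pr(b=0)
bits_hat = tf.cast(llr > 0, bits.dtype)
ber = tf.reduce_mean(tf.cast(bits_hat != bits, tf.float32))
print(f"Uncoded BER at 10 dB: {ber.numpy():.4f}")
```

Scaled up, this is the pattern the 6G Research Cloud targets: neural components trained in PyTorch or TensorFlow can replace or augment individual blocks of such a chain, with Nvidia GPUs accelerating both the simulation and the training.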

The platform “allows organisations to accelerate the development of 6G technologies that will connect trillions of devices with the cloud infrastructures, laying the foundation for a hyper-intelligent world, supported by autonomous vehicles, smart spaces and a wide range of extended reality and immersive education experiences and collaborative robots,” according to the vendor, which has signed up Ansys, Arm, ETH Zurich, Fujitsu, Keysight, Nokia, Northeastern University, Rohde & Schwarz, Samsung, SoftBank and Viavi among its initial 6G Research Cloud partners.

And still in the telecom domain, Singtel separately announced it will be launching its GPU-as-a-Service (GPUaaS) in Singapore and South-east Asia in the third quarter of this year, “providing enterprises with access to Nvidia’s AI computing power to drive greater efficiencies to accelerate growth and innovation.” Singtel first announced its collaboration with Nvidia earlier this year – see Singtel strikes green AI and datacentre partnerships.

Nvidia also announced a new suite of chips that can run chatbots in a car or truck (how absolutely wonderful and indispensable) and another GPU family for the creation of humanoid robots. Nvidia certainly dominates the AI infrastructure sector today, but big rivals such as AMD and Intel are not sitting idly by: Intel has its Gaudi AI accelerators and AMD its Instinct MI300 series, whilst startups such as Cerebras are also making waves. 

With demand for AI chips at an all-time high and forecast to keep growing throughout this year and next, at the very least, companies with AI chips in stock and ready to ship will win, while those without immediately available inventory will lose out. The laws of the market apply even in the rarefied heights of the highest of high-tech.

– Martyn Warwick, Editor in Chief, TelecomTV. With additional reporting by Ray Le Maistre, Editorial Director, TelecomTV
