Sales of semiconductors performing AI functions set to triple by 2025 as AI reshapes chip markets

The global market for memory and processing semiconductors used in artificial intelligence (AI) applications will soar to $128.9 billion in 2025, three times the $42.8 billion total in 2019, according to IHS Markit | Technology, now a part of Informa Tech.

Within the AI segment, worldwide revenue from memory devices in AI applications will increase to $60.4 billion in 2025, up from $20.6 billion in 2019. The processor segment will expand slightly faster, growing to $68.5 billion in 2025, up from $22.2 billion in 2019.

This total tracks sales of semiconductor content, both memory and processing devices, in systems capable of running AI applications.

AI chips are used widely across markets including automotive, communication, computers, consumer electronics, industrial and healthcare. The largest single market for semiconductors in AI applications is the computer segment, with sales rising to $65.9 billion in 2025 from $27.5 billion in 2019, a compound annual growth rate (CAGR) of 15.7 percent. However, other segments will generate faster growth, including the communication, consumer electronics, industrial and healthcare sectors.
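As a rough cross-check of the growth figure cited above, the short sketch below recomputes the compound annual growth rate from the 2019 and 2025 computer-segment values; the function and variable names are illustrative and not drawn from the IHS Markit report.

    # Back-of-envelope check of the compound annual growth rate (CAGR)
    # implied by the computer-segment figures quoted above. Illustrative only.

    def cagr(start_value: float, end_value: float, years: int) -> float:
        """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
        return (end_value / start_value) ** (1 / years) - 1

    # 2019 to 2025 spans six compounding years; values are in $ billions.
    rate = cagr(start_value=27.5, end_value=65.9, years=6)
    print(f"Computer-segment CAGR: {rate:.1%}")  # prints roughly 15.7%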

“Semiconductors represent the foundation of the AI supply chain, providing the essential processing and memory capabilities required for every artificial intelligence application on earth,” said Luca De Ambroggi, senior research director for AI at IHS Markit | Technology. “AI is already propelling massive demand growth for microchips. However, the technology also is changing the shape of the chip market, redefining traditional processor architectures and memory interfaces to suit new performance demands.”

AI-driven processor architectures emerge

Several startups are now aiming to offer completely new architectures that will challenge the market supremacy of the traditional devices used for AI processing, such as graphics processing units (GPUs), field programmable gate arrays (FPGAs), microprocessors (MPUs), microcontrollers (MCUs) and digital signal processors (DSPs). These new architectures include capabilities such as integrated vector processing, which can accelerate deep-learning tasks.
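As a loose illustration of what integrated vector processing buys for deep-learning workloads, the sketch below contrasts a scalar multiply-accumulate loop with the same dot product expressed as a single vectorized call; the array sizes and the use of NumPy are illustrative assumptions, not details from the announcement.

    # Deep-learning inference is dominated by multiply-accumulate operations,
    # which vector (SIMD) hardware executes many elements at a time.
    import numpy as np

    weights = np.random.rand(1024).astype(np.float32)      # illustrative layer weights
    activations = np.random.rand(1024).astype(np.float32)  # illustrative inputs

    # Scalar view: one multiply-accumulate per loop iteration.
    acc = 0.0
    for w, a in zip(weights, activations):
        acc += w * a

    # Vectorized view: the same dot product as one call, mapping onto wide vector units.
    acc_vec = np.dot(weights, activations)

    assert np.isclose(acc, acc_vec, rtol=1e-3)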

Moreover, the introduction of AI-related capabilities into various devices means that these traditional classes of processors are evolving to the point where they are no longer recognizable as distinct categories.

“The old definitions of what makes an MPU, DSP or MCU are beginning to blur in the AI era, as each type of device adds cores with different functions,” De Ambroggi said. “Increasingly, designers of AI-enabled systems are using highly integrated heterogeneous processing solutions, such as application-specific integrated circuits (ASICs) and system-on-chip (SoC) solutions. With processor makers offering turnkey, heterogeneous processing solutions based on these ASICs and SoCs, it makes less difference to system designers whether their AI algorithm is executed on a GPU, CPU or DSP.”

Overcoming the AI memory bottleneck

Advanced AI technologies, specifically deep-learning algorithms, require huge amounts of high-bandwidth volatile memory to perform properly. However, increasing the memory bandwidth to the levels needed for AI algorithms also can drive up power consumption to unsustainable levels.

To address this challenge, the semiconductor industry is studying some innovative approaches, including:

  • A new processor architecture wherein the memory is closer to the computational core, reducing the burden of data movement and enabling high processing parallelism with dedicated memory cells for each processing core.
  • Moving the early stages of data computation into the memory itself, a technique called processing-in-memory (PIM). PIM delivers benefits similar to those of the approach above; a rough sketch of the data-movement costs both approaches target follows this list.
  • Identifying new memory technologies that enable new approaches, offering easy back-end silicon integration, the performance of volatile memory combined with non-volatile data retention, low picojoule-per-byte energy consumption, or a new, fast input/output (I/O) interface.
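A back-of-the-envelope sketch of why these approaches matter is shown below; every constant in it is an illustrative assumption chosen for the example, not a figure from the report.

    # Rough estimate of energy spent moving data versus computing on it for one
    # deep-learning inference. All constants are illustrative assumptions.

    PJ_PER_BYTE_OFF_CHIP = 100.0  # assumed cost of an off-chip memory access, pJ/byte
    PJ_PER_BYTE_NEAR_MEM = 10.0   # assumed cost with memory close to the compute core
    PJ_PER_MAC = 1.0              # assumed cost of one multiply-accumulate operation

    bytes_moved = 50e6            # assumed weight and activation traffic per inference
    macs = 500e6                  # assumed multiply-accumulates per inference

    compute_energy = macs * PJ_PER_MAC
    movement_off_chip = bytes_moved * PJ_PER_BYTE_OFF_CHIP
    movement_near_mem = bytes_moved * PJ_PER_BYTE_NEAR_MEM

    print(f"Compute energy:           {compute_energy / 1e6:.0f} uJ")
    print(f"Data movement (off-chip): {movement_off_chip / 1e6:.0f} uJ")
    print(f"Data movement (near-mem): {movement_near_mem / 1e6:.0f} uJ")

Under these assumed numbers, off-chip data movement consumes roughly ten times the energy of the arithmetic itself, which is the imbalance that near-memory and processing-in-memory designs aim to reduce.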

The Artificial Intelligence - Technology Adoption & Impact Report – 2019 tracks the development, impact, and disruption caused by AI technologies across industries. The report includes revenue forecasts for machine-learning-based equipment by industry and unit shipment forecasts for silicon solutions by domain and application type.

This content extract was originally sourced from an external website (IHS Technology Press Releases) and is the copyright of the external website owner.
