NPUs are Emerging as the Main Rival to NVIDIA’s AI Dominance


NVIDIA’s competitive advantage is built around graphics processing units (GPUs), which are large, expensive and energy intensive

Fueled by the rise of generative AI, the semiconductor market has established itself as one of the most profitable industries globally: the sector is already worth over $600 billion, a figure set to increase to $1 trillion by 2030. During this growth period, NVIDIA has achieved the status of undisputed leader, predominantly as a result of the superior performance of its graphics processing units (GPUs).

However, the high performance of GPUs in terms of raw computing power comes at a price. These chips are both expensive and energy intensive, calling into question whether their widespread use is sustainable in the long term.

According to Dorian Maillard, Vice President at DAI Magister, environmental concerns are driving the development of more energy-efficient algorithms and hardware, which could lay the foundations for the mass adoption of domain-specific processors optimized for executing AI tasks efficiently, known as neural processing units (NPUs).

Maillard said: “Despite efforts from companies like Microsoft, AWS, and Google to develop their own AI GPU and NPU chips, NVIDIA remains the clear frontrunner in the AI hardware market due to the high performance and established ecosystem of its GPUs. Nonetheless, NVIDIA’s dominance in the GPU space overshadows two fundamental issues: high capital expenditure and energy consumption related to running AI.

“It is estimated that a single AI search query consumes up to 10 times more energy than a standard Google search, highlighting the need for initiatives that mitigate the costs and carbon footprint of AI whilst remaining competitive with NVIDIA’s performance.

“This problem has given rise to a new type of chip: the neural processing unit, or NPU. NPUs are engineered to accelerate the processing of AI tasks, including deep learning and inference. They can process large volumes of data in parallel and swiftly execute complex AI algorithms using specialized on-chip memory for efficient data storage and retrieval.

“While GPUs possess greater processing power and versatility, NPUs are smaller, less expensive and more energy efficient. Counterintuitively, NPUs can also outperform GPUs in specific AI tasks due to their specialized architecture.

“Key NPU applications include enhancing efficiency and productivity in industrial IoT and automation; powering infotainment systems and autonomous driving in the automotive sector; and enabling high-performance smartphone cameras, augmented reality (AR), facial and emotion recognition, and fast on-device data processing.

“GPUs and NPUs can also be deployed in tandem to deliver greater efficiency. In data centers and machine learning/deep learning (ML/DL) environments used to train AI models, NPUs are increasingly being integrated to complement GPUs, especially where energy conservation and low latency are required.”

Maillard concluded: “We expect fundraising activity in the AI-related NPU edge device sector to continue its upward trajectory. Several factors will drive this momentum: the growing importance of AI in almost all industries, increasing investments in R&D, and a surge in demand for high-performance, low-power chips.

“Moreover, with larger tech giants like Microsoft, AWS, and Google actively seeking to develop or acquire AI chip technologies, market consolidation is on the horizon. These tech behemoths are not only seeking to expand their capabilities but also to ensure they remain competitive against NVIDIA’s formidable presence.”
