NVIDIA Accelerates Google Quantum AI Processor Design With Simulation of Quantum Device Physics

NVIDIA CUDA-Q Platform Enables Google Quantum AI Researchers to Create Massive Digital Model of Its Quantum Computer to Solve Design Challenges

NVIDIA announced it is working with Google Quantum AI to accelerate the design of its next-generation quantum computing devices using simulations powered by the NVIDIA CUDA-Q™ platform.

Google Quantum AI is using the hybrid quantum-classical computing platform and the NVIDIA Eos supercomputer to simulate the physics of its quantum processors. This will help overcome a key limitation of current quantum computing hardware, which can run only a limited number of quantum operations before computations must cease, due to what researchers call “noise.”

“The development of commercially useful quantum computers is only possible if we can scale up quantum hardware while keeping noise in check,” said Guifre Vidal, a research scientist at Google Quantum AI. “Using NVIDIA accelerated computing, we’re exploring the noise implications of increasingly larger quantum chip designs.”

Understanding noise in quantum hardware designs requires complex dynamical simulations capable of fully capturing how qubits within a quantum processor interact with their environment.
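
The announcement does not detail the models Google uses, but the standard mathematical workhorse for simulating a qubit interacting with its environment is an open-system (Lindblad) master equation. The sketch below, in plain Python with NumPy and SciPy, evolves a single qubit under amplitude damping to show what such a dynamical simulation computes at the smallest possible scale; the frequency and decay rate are illustrative placeholders, not values from Google's hardware.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-qubit amplitude-damping (T1 decay) example; all rates are illustrative.
omega = 2 * np.pi * 1.0   # qubit frequency (arbitrary units)
gamma = 0.1               # decay rate from coupling to the environment

sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator |0><1|
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * omega * sz

def lindblad(t, rho_vec):
    rho = rho_vec.reshape(2, 2)
    # Unitary evolution plus a single amplitude-damping dissipator.
    drho = -1j * (H @ rho - rho @ H)
    drho += gamma * (sm @ rho @ sm.conj().T
                     - 0.5 * (sm.conj().T @ sm @ rho + rho @ sm.conj().T @ sm))
    return drho.ravel()

rho0 = np.array([[0, 0], [0, 1]], dtype=complex)  # start in the excited state
sol = solve_ivp(lindblad, (0, 50), rho0.ravel(), t_eval=np.linspace(0, 50, 200))

# The excited-state population decays toward zero as the environment drains energy.
populations = sol.y.reshape(2, 2, -1)[1, 1, :].real
print(populations[::40])
```

A full device-physics simulation of a many-qubit processor couples many such open systems together, which is why the state space, and the compute cost, grows so quickly with qubit count.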

These simulations have traditionally been prohibitively expensive to run. Using the CUDA-Q platform, however, Google can employ 1,024 NVIDIA H100 Tensor Core GPUs on the NVIDIA Eos supercomputer to perform one of the world’s largest and fastest dynamical simulations of quantum devices, at a fraction of the cost.

“AI supercomputing power will be helpful to quantum computing’s success,” said Tim Costa, director of quantum and HPC at NVIDIA. “Google’s use of the CUDA-Q platform demonstrates the central role GPU-accelerated simulations have in advancing quantum computing to help solve real-world problems.”

With CUDA-Q and H100 GPUs, Google can perform fully comprehensive, realistic simulations of devices containing 40 qubits, the largest simulations of this kind ever performed. The simulation techniques provided by CUDA-Q mean noisy simulations that would have taken a week can now run in minutes.
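
The announcement does not show the new device-level dynamics interface, but CUDA-Q’s existing Python API already exposes noisy simulation through noise models attached to a density-matrix backend. The sketch below is a minimal illustration of that pattern, not Google’s workflow; the gate, channel, and error probability are arbitrary choices for demonstration.

```python
import cudaq

# Density-matrix simulator backend, which supports noise models.
cudaq.set_target("density-matrix-cpu")

noise = cudaq.NoiseModel()
# Apply a depolarizing channel after every X gate on qubit 0 (probability is illustrative).
noise.add_channel("x", [0], cudaq.DepolarizationChannel(0.01))

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)
    h(qubits[0])
    x.ctrl(qubits[0], qubits[1])
    mz(qubits)

# Sample the circuit with the noise model attached.
counts = cudaq.sample(bell, noise_model=noise, shots_count=1000)
print(counts)
```

Sampling the same kernel with and without the noise model makes the effect of the channel visible in the measured counts.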

The software powering these accelerated dynamical simulations will be publicly available in the CUDA-Q platform, allowing quantum hardware engineers to rapidly scale their system designs.
