@HPCpodcast: Dr. Ian Cutress on the State of Advanced Chips, the GPU Landscape and AI Compute, Global Chip Manufacturing and GTC Expectations
Just before GTC, and for the 100th episode of the @HPCpodcast (this one sponsored by liquid cooling company CoolIT), we welcome special guest and high-powered chip industry analyst Dr. Ian Cutress, Chief Analyst at More Than Moore and host of the popular YouTube channel TechTechPotato, to discuss the state of AI and […]
https://orionx.net/wp-content/uploads/2025/03/100@HPCpodcast_ID_Dr-Ian-Cutress_State-of-AI-Advanced-Chips_20250312.mp3
Axelera AI Wins EuroHPC Grant of up to €61.6M for AI Chiplet Development
AI hardware maker Axelera AI has unveiled Titania, which the company described as a high-performance, low-power and scalable AI inference chiplet. Part of the EuroHPC Joint Undertaking’s effort to develop a ….
Fluidstack and Eclairion to Deliver 18K GPU Supercomputer in France
London-based AI cloud platform Fluidstack and Eclairion, a French maker of modular, high-density data centers, have partnered to build what the companies say will be Europe’s largest GPU supercomputer, to be delivered in 2025 for Mistral AI ….
TSMC to Invest $100B in 3 New U.S. Fabs, Packaging, R&D
TSMC (TWSE: 2330, NYSE: TSM) today announced its intention to expand its investment in advanced semiconductor manufacturing in the United States by an additional $100 billion. Building on the company’s ongoing $65 billion investment in its advanced chip fabs in Phoenix, TSMC’s total investment in the U.S. is expected to reach US$165 billion. The expansion […]
d-Matrix Launches New Chiplet Connectivity Platform to Address Exploding Compute Demand for Generative AI
Today, d-Matrix, a leader in high-efficiency AI compute and inference processors, announced Jayhawk, an Open Domain-Specific Architecture (ODSA) Bunch of Wires (BoW) based chiplet platform for energy-efficient die-to-die connectivity over organic substrates. Building on the Nighthawk chiplet platform launched in 2021, the second-generation Jayhawk silicon extends d-Matrix’s scale-out, chiplet-based inference compute platform. d-Matrix customers will be able to use these inference compute platforms to run Generative AI applications and Large Language Model transformer workloads with a 10-20X improvement in performance.