SC24: Technical Program Leaders Discuss Their Role and Scientific Vision

Science lies at the heart of the annual Supercomputing conference, and the Technical Program is one of its most important and challenging components. To learn more about what this program does, and the scientific vision that drives every decision within it, here’s an interview with SC24 Technical Program Chair Guillaume Pallez (Inria) and Vice Chair Judith Hill (LLNL).

Aurora Supercomputer Ranks Fastest for AI

At ISC High Performance 2024, Intel announced in collaboration with Argonne National Laboratory and Hewlett Packard Enterprise (HPE) that the Aurora supercomputer has broken the exascale barrier at 1.012 exaflops and is the world’s fastest AI system dedicated to open science, achieving 10.6 exaflops of AI performance. Intel will also detail the crucial role of open ecosystems in driving AI-accelerated high performance computing (HPC).

The Infrastructure behind the Outputs: Cloud and HPC Unlock the Power of AI

In this contributed article, Philip Pokorny, Chief Technology Officer for Intelligent Platform Solutions/Penguin Solutions at SGH, shares insights on the relationship between high-performance computing (HPC) and generative AI, along with his perspective on the growing market. The increasing momentum behind generative AI in recent months has expanded what enterprise businesses can expect to achieve, and at the forefront of this technology will be those that leverage HPC to create their solutions.

Life is Fleeting, But Data is Forever – Meet your Digital Twin

[SPONSORED POST] With the transformation of medicine from analog to digital, plus the rise of new data-generating devices for health tracking and genomic information, we can look forward to a new world in which virtually every aspect of a patient’s medical history can be communicated, stored, and manipulated. For each patient, this huge body of data represents a sort of digital twin, a treasure trove of useful medical information and insights that could become invaluable in developing patient treatments in the future.

NVIDIA Supercharges Hopper, the World’s Leading AI Computing Platform

NVIDIA today announced it has supercharged the world’s leading AI computing platform with the introduction of the NVIDIA HGX™ H200. Based on NVIDIA Hopper™ architecture, the platform features the NVIDIA H200 Tensor Core GPU with advanced memory to handle massive amounts of data for generative AI and high performance computing workloads.

Revolutionizing Bioscience Research: Creating an Atlas of the Human Body

Making healthcare and life science (HCLS) discoveries is time-consuming and requires considerable amounts of data. HPC enterprise infrastructure with AI and edge-to-cloud capabilities is required to make creating an atlas of the human body possible. The HPE, NVIDIA, and Flywheel collaboration, using the latest technologies designed for HCLS, promises to transform biomedical research.

CoreWeave Among First Cloud Providers to Offer NVIDIA HGX H100 Supercomputers Set to Transform AI Landscape

CoreWeave, a specialized cloud provider built for large-scale GPU-accelerated workloads, announced it is among the first to offer cloud instances with NVIDIA HGX H100 supercomputers. CoreWeave, Amazon, Google, Microsoft, and Oracle are the first cloud providers included in the launch of this groundbreaking AI platform.

Exxact Partners with Run:ai to Offer Maximal Utilization in GPU Clusters for AI Workloads

Exxact Corporation, a leading provider of high-performance computing (HPC), artificial intelligence (AI), and data center solutions, now offers Run:ai in its solutions. This groundbreaking Kubernetes-based orchestration tool incorporates an AI-dedicated, high-performance super-scheduler tailored for managing GPU resources in AI clusters.

AMAX Launches GPU Servers Powered by Intel’s Newest Data Center GPU Flex Series for AI, Gaming, & Media Streaming

AMAX, a leading provider of turnkey rack-scale High Performance Computing (HPC) solutions, Deep Learning/AI applications, and server appliance manufacturing, announces the new AceleMax X-122-Flex server solution featuring Intel’s next-generation Data Center GPU Flex Series (formerly code-named Arctic Sound-M). The new graphics processing unit (GPU) solution handles high-density, complex workloads targeted toward media delivery, cloud gaming, AI, metaverse, and other emerging visual cloud use cases.

NVIDIA Announces Hopper Architecture, the Next Generation of Accelerated Computing

To power the next wave of AI data centers, NVIDIA today announced its next-generation accelerated computing platform with NVIDIA Hopper™ architecture, delivering an order of magnitude performance leap over its predecessor. Named for Grace Hopper, a pioneering U.S. computer scientist, the new architecture succeeds the NVIDIA Ampere architecture, launched two years ago.