Run:ai’s 2023 State of AI Infrastructure Survey Reveals that Infrastructure and Compute have Surpassed Data Scarcity as the Top Barrier to AI Development

The 2023 State of AI Infrastructure Survey, commissioned by Run:ai, sheds light on the growing challenges faced by organizations in AI development. The survey, which was conducted by Global Surveyz Research and gathered responses from 450 industry professionals across the US and Western EU, reveals that infrastructure and compute, chosen by 54% and 43% of respondents respectively, are now the primary hurdles, surpassing data as the key challenge facing AI development. 

10 Tips to Ensure Effective Infrastructure Monitoring 

In this special guest feature, Adrian Phillips, who leads product marketing for the Infrastructure Monitoring solutions at Dynatrace, argues that there's too much at stake to allow an oversight to result in a security breach or downtime. In this article, you'll discover 10 tips to ensure your infrastructure monitoring is effective.

Big Data Industry Predictions for 2023

Welcome to insideAI News’s annual technology predictions round-up! The big data industry has significant inertia moving into 2023. In order to give our valued readers a pulse on important new trends leading into next year, we here at insideAI News heard from all our friends across the vendor ecosystem to get their insights, reflections and predictions for what may be coming. We were very encouraged to hear such exciting perspectives.

CoreWeave Among First Cloud Providers to Offer NVIDIA HGX H100 Supercomputers Set to Transform AI Landscape

CoreWeave, a specialized cloud provider built for large-scale GPU-accelerated workloads, announced it is among the first to offer cloud instances featuring NVIDIA HGX H100 supercomputers. CoreWeave, Amazon, Google, Microsoft and Oracle are the first cloud providers included in the launch of this groundbreaking AI platform.

Video Highlights: Modernize your IBM Mainframe & Netezza With Databricks Lakehouse

In the video presentation below, learn from experts how to architect modern data pipelines to consolidate data from multiple IBM data sources into Databricks Lakehouse, using the state-of-the-art replication technique—Change Data Capture (CDC).
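At its core, the Change Data Capture technique mentioned above replays a stream of change events (inserts, updates, deletes) from a source database against a target table. The sketch below is a minimal, hypothetical illustration of that replay step in Python; real CDC tools read these events from the source database's transaction log, and the `apply_cdc_events` function and sample data here are assumptions for illustration only.

```python
# Minimal sketch of the "apply" side of Change Data Capture (CDC):
# replay an ordered stream of change events into a target table,
# represented here as a plain dict keyed by primary key.

def apply_cdc_events(target, events):
    """Apply ordered CDC events (op, key, row) to a target table (dict)."""
    for op, key, row in events:
        if op in ("insert", "update"):
            target[key] = row          # upsert the latest row image
        elif op == "delete":
            target.pop(key, None)      # drop the row if present
    return target

# Hypothetical change stream captured from a source table.
events = [
    ("insert", 1, {"name": "Alice", "dept": "IT"}),
    ("insert", 2, {"name": "Bob", "dept": "HR"}),
    ("update", 1, {"name": "Alice", "dept": "Data"}),
    ("delete", 2, None),
]

replica = apply_cdc_events({}, events)
print(replica)  # only the final state of row 1 survives
```

Because events are applied in commit order, the replica converges to the source's current state without a full table reload, which is what makes CDC attractive for consolidating mainframe and Netezza data into a lakehouse.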

DDN Simplifies Enterprise Digital Transformation with New NVIDIA DGX BasePOD and DGX SuperPOD Reference Architectures

DDN®, a leader in artificial intelligence (AI) and multi-cloud data management solutions, announced its next generation of reference architectures for NVIDIA DGX™ BasePOD and NVIDIA DGX SuperPOD. These new AI-enabled data storage solutions enhance DDN's position as the leader for enterprise digital transformation at scale, while simplifying the deployment and management of systems of all sizes by 10X, from proof of concept to production and expansion.

Video Highlights: Why Does Observability Matter?

Why does observability matter? Isn't observability just a fancier word for monitoring? Observability has become a buzzword in the big data space. It's thrown around so often that it can be easy to forget what it really means. In this video presentation, our friends over at Pepperdata provide some important insights into this increasingly popular technology.

Cerebras Wafer-Scale Cluster Brings Push-Button Ease and Linear Performance Scaling to Large Language Models

Cerebras Systems, a pioneer in accelerating artificial intelligence (AI) compute, unveiled the Cerebras Wafer-Scale Cluster, delivering near-perfect linear scaling across hundreds of millions of AI-optimized compute cores while avoiding the pain of distributed computing. With a Wafer-Scale Cluster, users can distribute even the largest language models from a Jupyter notebook running on a laptop with just a few keystrokes, replacing months of painstaking work with clusters of graphics processing units (GPUs).

Myth busting: The Truth About Disaggregated Storage 

In this contributed article, Scott Hamilton, Senior Director, Product Management & Marketing at Western Digital, shows that for large enterprises, composable disaggregated infrastructure (CDI) enables the intelligent allocation of dynamic resources, which is a must for controlling costs, boosting performance, optimizing IT resources and maximizing efficiency. However, the rise of any technology often generates some confusion, and this piece dispels some myths around disaggregated storage.

Pinecone Announces New Features to Lower the Barrier of Entry for Vector Search

Pinecone Systems Inc., a search infrastructure company, announced the release of new features and enhancements that make it significantly easier for developers, regardless of AI or ML experience and background, to get started with vector search for applications such as semantic search and recommendation systems. New features include up to 10x faster indexes, flexible collections of vector data, and zero-downtime vertical scaling.
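The vector search underlying services like Pinecone boils down to ranking stored embeddings by their similarity to a query embedding. The sketch below is a minimal, self-contained illustration of that idea using cosine similarity over NumPy arrays; the `top_k` function and toy embeddings are hypothetical and are not Pinecone's API, which manages indexing and scaling for you.

```python
import numpy as np

def top_k(query, index, k=2):
    """Return indices of the k index vectors most similar to the query,
    ranked by cosine similarity (highest first)."""
    # Normalize rows and the query so the dot product equals cosine similarity.
    index = index / np.linalg.norm(index, axis=1, keepdims=True)
    query = query / np.linalg.norm(query)
    scores = index @ query
    return np.argsort(-scores)[:k]

# Four toy 2-D "document embeddings"; real embeddings have hundreds of dims.
docs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]])
q = np.array([1.0, 0.05])  # hypothetical query embedding

print(top_k(q, docs))  # indices of the two most similar documents
```

A production vector database replaces the brute-force `argsort` with approximate nearest-neighbor indexes so queries stay fast at millions of vectors, which is exactly the operational burden Pinecone's announcement aims to lift from developers.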