At 2024 AI Hardware & Edge AI Summit: Gayathri Radhakrishnan, Partner – Investment Team, Hitachi Ventures

At the recent 2024 AI Hardware & Edge AI Summit in San Jose, Calif., I caught up with Gayathri Radhakrishnan, Partner – Investment Team with Hitachi Ventures, a venture firm that looks for the best solutions to address the world’s technical, social and environmental challenges. The discussion starts with a brief tour of the company’s mission and market […]

Hammerspace Unveils the Fastest File System in the World for Training Enterprise AI Models at Scale

Hammerspace, the company orchestrating the Next Data Cycle, unveiled the high-performance NAS architecture needed to address the requirements of broad-based enterprise AI, machine learning and deep learning (AI/ML/DL) initiatives and the widespread rise of GPU computing both on-premises and in the cloud. This new category of storage architecture – Hyperscale NAS – is built on the tenets required for large language model (LLM) training and provides the speed to efficiently power GPU clusters of any size for GenAI, rendering and enterprise high-performance computing.

Achieving Faster Time To Insights with Modern Data Pipelines

In this sponsored post, Devika Garg, PhD, Senior Solutions Marketing Manager for Analytics at Pure Storage, argues that in the current era of data-driven transformation, IT leaders must manage complexity by simplifying their analytics and data footprint. Data pipelines let IT leaders optimize data and maximize its value for faster analytic insights. When paired with the right storage solution, IT leaders can modernize their pipelines and consolidate data into a central, accessible layer – breaking through silos and delivering the real-time insights that drive true business advantage. Modern data analytics fuels digital-first organizations to unlock faster insights – is your infrastructure keeping pace?

Hewlett Packard Enterprise Acquires Pachyderm to Expand AI-at-Scale Capabilities with Reproducible AI

Hewlett Packard Enterprise (NYSE: HPE) today announced an expansion of its AI-at-scale offerings with the acquisition of Pachyderm, a startup whose software, based on open-source technology, automates reproducible machine learning pipelines for large-scale AI applications.
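Pachyderm’s core promise is reproducibility: a pipeline run is tied to the exact data and code that produced it. As a minimal conceptual sketch (plain Python, not Pachyderm’s API; the run directory and step function are invented for illustration), each step’s output can be keyed to a content hash of its input data plus its own bytecode, so an identical data-and-code pair replays a cached result instead of recomputing:

```python
import hashlib
import json
from pathlib import Path

def run_step(step_fn, input_paths, out_dir="runs"):
    """Run a pipeline step keyed by a hash of its input data and its own
    bytecode, so the same (data, code) pair always yields the same output."""
    h = hashlib.sha256()
    for p in sorted(input_paths):
        h.update(Path(p).read_bytes())      # version the data
    h.update(step_fn.__code__.co_code)      # version the transformation code
    out = Path(out_dir) / f"{h.hexdigest()}.json"
    if out.exists():                        # identical inputs + code: replay cache
        return json.loads(out.read_text())
    result = step_fn(input_paths)           # result assumed JSON-serializable
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(result))
    return result
```

This is roughly the property Pachyderm provides at production scale by versioning datasets and pipeline specifications together.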

How to Ensure an Effective Data Pipeline Process

In this contributed article, Rajkumar Sen, Founder and CTO at Arcion, discusses how the business data in a modern enterprise is spread across various platforms and formats. Data may reside in operational databases, cloud warehouses, data lakes and lakehouses, or even external public sources. Data pipelines connecting this variety of sources need to follow established best practices so that data consumers get high-quality data delivered to where data applications are being built.
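As one illustration of such a best practice, the hypothetical sketch below gates incoming records through named quality rules and quarantines failures with a reason, rather than silently passing bad data downstream (the rules and field names are invented for the example):

```python
# Invented quality rules: each is a named predicate applied per record.
RULES = {
    "has_id":         lambda r: r.get("id") is not None,
    "amount_numeric": lambda r: isinstance(r.get("amount"), (int, float)),
    "amount_nonneg":  lambda r: isinstance(r.get("amount"), (int, float)) and r["amount"] >= 0,
}

def quality_gate(records):
    """Split records into clean ones and quarantined ones annotated with
    the names of the rules they failed."""
    clean, quarantined = [], []
    for rec in records:
        failed = [name for name, rule in RULES.items() if not rule(rec)]
        if failed:
            quarantined.append({"record": rec, "failed_rules": failed})
        else:
            clean.append(rec)
    return clean, quarantined

clean, bad = quality_gate([{"id": 1, "amount": 9.5}, {"id": None, "amount": -2}])
# clean == [{'id': 1, 'amount': 9.5}]; quarantined records carry their failure reasons
```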

eBook: Unlock Complex and Streaming Data with Declarative Data Pipelines 

Our friend, Ori Rafael, CEO of Upsolver and advocate for engineers everywhere, released his new book “Unlock Complex and Streaming Data with Declarative Data Pipelines.” Ori discusses why declarative pipelines are necessary for data-driven businesses, how they improve engineering productivity, and how they help businesses unlock more potential from their raw data. Data pipelines are essential to unleashing the potential of data and can successfully pull from multiple sources.
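To make the declarative idea concrete in miniature (an invented toy syntax, not Upsolver’s): the spec below states only what the output should contain, and a small engine decides how to execute it, which is what frees engineers from hand-coding each pipeline:

```python
import json

# A toy declarative spec: *what* to produce, not *how* to compute it.
SPEC = {
    "source": "events.jsonl",                          # hypothetical input file
    "where":  {"field": "type", "equals": "purchase"},
    "select": ["user_id", "amount"],
}

def run(spec):
    """A tiny 'engine' that interprets the spec; a real system would also
    handle incremental execution, scaling, and state management."""
    cond = spec["where"]
    with open(spec["source"]) as f:
        rows = (json.loads(line) for line in f)
        kept = (r for r in rows if r.get(cond["field"]) == cond["equals"])
        return [{k: r.get(k) for k in spec["select"]} for r in kept]
```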

Databricks Announces General Availability of Delta Live Tables

Databricks, the Data and AI company and pioneer of the data lakehouse paradigm, announced the general availability of Delta Live Tables (DLT), the first ETL framework to use a simple declarative approach to build reliable data pipelines and to automatically manage data infrastructure at scale. Turning SQL queries into production ETL pipelines often requires a lot of tedious, complicated operational work. By using modern software engineering practices to automate the most time-consuming parts of data engineering, data engineers and analysts can concentrate on delivering data rather than on operating and maintaining pipelines.
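For a sense of what that declarative approach looks like, here is a small DLT table definition in Python. It only runs inside a Databricks DLT pipeline, and the raw_orders source and its columns are hypothetical:

```python
import dlt  # available inside a Databricks Delta Live Tables pipeline

# Declare the table we want; DLT infers dependencies, provisions the
# underlying infrastructure, and keeps the table up to date.
@dlt.table(comment="Orders with validated amounts")
@dlt.expect_or_drop("valid_amount", "amount >= 0")  # rows failing the expectation are dropped
def clean_orders():
    # "raw_orders" is a hypothetical upstream dataset in the same pipeline
    return dlt.read("raw_orders").select("order_id", "amount", "order_ts")
```

Expectations such as expect_or_drop are DLT’s built-in way of encoding data quality rules directly in the pipeline definition.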

5 Easy Steps to Make Your Data Business Ready

In this contributed article, Ayush Parashar Vice President of Engineering at Boomi, discusses five core components to a strong data strategy so businesses can derive insights from and act on the data. As the uses for data continue to grow, businesses must ensure their data is actually usable.

How Governing Observability Data is Critical to ESG Success

In this contributed article, Nick Heudecker, Senior Director of Market Strategy at Cribl, discusses how observability data comprises the logs, events, metrics, and traces that make things like security, performance management, and monitoring possible. While often overlooked, governing these data sources is critical in today’s enterprises. The current state of observability data management is, at best, fragmented and ad hoc. By adopting an observability pipeline as a key component in your observability infrastructure, you can centralize your governance efforts while remaining agile in the face of constant change.
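As a toy illustration of the kind of governance an observability pipeline can centralize (an invented event shape, not Cribl’s API): the sketch below redacts email addresses from event messages and routes events by severity before they reach any backend:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def process(event):
    """Normalize one event: strip PII from the message (a governance rule),
    then pick a destination based on severity."""
    event["message"] = EMAIL.sub("<redacted>", event.get("message", ""))
    route = "security" if event.get("level") == "alert" else "general"
    return route, event

route, ev = process({"level": "alert", "message": "login failure for bob@example.com"})
# route == "security"; ev["message"] ends with "for <redacted>"
```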

How Yum! Brands Uses Location Data from Foursquare to Make Smarter Decisions

Join this virtual event with a compelling panel of technology leaders to discover how Yum! Brands and other organizations are leveraging location-based data to boost in-app location accuracy, increase in-store foot traffic, and expand e-commerce business.