AI Beyond LLMs: How LQMs Are Unlocking the Next Wave of AI Breakthroughs

In this contributed article, Dr. Stefan Leichenauer, Vice President of Engineering and lead scientist at SandboxAQ, discusses the profound evolution that is now emerging: Large Quantitative Models (LQMs), designed to tackle complex real-world problems in areas such as healthcare, climate science, and materials design, are set to revolutionize industries and unlock new AI-powered breakthroughs for some of the world’s greatest challenges.

New Release of Graphwise GraphDB Delivers Multi-Method Graph RAG to Accelerate R&D for GenAI Applications, Increase Precision, and Enable Self-Service Data

Graphwise, a leading Graph AI provider, announced the immediate availability of GraphDB 10.8. This release includes the next-generation Talk-to-Your-Graph capability that integrates LLMs with vector-based retrieval of relevant enterprise information and precise querying of knowledge graphs.
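The Talk-to-Your-Graph capability described here combines two retrieval paths before the LLM answers: semantic (vector) search over indexed content and precise SPARQL queries against the knowledge graph. The sketch below is not GraphDB's actual API; it is a minimal illustration of that multi-method Graph RAG pattern, assuming a SPARQL endpoint reachable through the standard SPARQLWrapper library, a toy in-memory vector index, and an endpoint URL that is purely hypothetical.

```python
# Illustrative multi-method Graph RAG sketch -- NOT GraphDB's actual API.
# Assumptions: a GraphDB-style SPARQL endpoint at ENDPOINT (URL is hypothetical),
# a toy in-memory vector index, and an LLM client supplied by the caller.
import numpy as np
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://localhost:7200/repositories/enterprise-kg"  # assumed repository URL


def vector_retrieve(query_vec, doc_vecs, docs, k=3):
    """Semantic path: return the k documents whose embeddings best match the query."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    return [docs[i] for i in np.argsort(-sims)[:k]]


def graph_retrieve(entity_uri, limit=25):
    """Structured path: pull precise facts about an entity via SPARQL."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(f"SELECT ?p ?o WHERE {{ <{entity_uri}> ?p ?o }} LIMIT {limit}")
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    return [f'{r["p"]["value"]} -> {r["o"]["value"]}' for r in rows]


def build_prompt(question, query_vec, doc_vecs, docs, entity_uri):
    """Merge both retrieval paths into one grounded prompt for whichever LLM is used."""
    context = vector_retrieve(query_vec, doc_vecs, docs) + graph_retrieve(entity_uri)
    return "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
```

Passing the assembled prompt to the LLM closes the loop: the vector path handles fuzzy, semantic questions, while the SPARQL path keeps named entities and their relationships precise.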

UiPath Integrates Anthropic Claude Language Models to Deliver Next Generation AI Assistant and Solutions

UiPath embeds Anthropic’s Claude LLMs to fuel UiPath Autopilot for everyone, Clipboard AI, and a new GenAI healthcare solution to offer customers improved productivity, cost savings, and decision-making capabilities. UiPath (NYSE: PATH), a leading enterprise automation and AI software company, announced the integration of Anthropic’s large language model (LLM), Claude 3.5 Sonnet, to deliver new AI features in three key […]

The insideAI News IMPACT 50 List for Q4 2024

The team here at insideAI News keeps a close pulse on the big data ecosystem of companies from around the globe. We’re in close contact with the movers and shakers making waves in big data, data science, machine learning, AI, and deep learning. Our inbox fills each day with new announcements, commentaries, and insights about what’s driving the success of our industry, so we’re in a unique position to publish our quarterly IMPACT 50 List.

Teradata Makes Real-World GenAI Easier, Speeds Business Value

Teradata (NYSE: TDC) announced new capabilities for VantageCloud Lake and ClearScape Analytics that make it possible for enterprises to easily implement and see immediate ROI from generative AI (GenAI) use cases.

Cloudflare Enhances AI Inference Platform with Powerful GPU Upgrade, Faster Inference, Larger Models, Observability, and Upgraded Vector Database

Cloudflare, Inc. (NYSE: NET), a leading connectivity cloud company, announced powerful new capabilities for Workers AI, its serverless AI platform, and its suite of AI application building blocks to help developers build faster and more performant AI applications. Applications built on Workers AI can now benefit from faster inference, larger models, improved performance analytics, and more.

Dataiku Launches LLM Guard Services to Control Generative AI Rollouts From Proof-of-Concept to Production in the Enterprise  

Dataiku, the Universal AI Platform, today announced the launch of its LLM Guard Services suite that is designed to advance enterprise GenAI deployments at scale from proof-of-concept to full production without compromising cost, quality, or safety.

Betterworks Elevates Privacy and Reduces Performance Management Tasks With Launch of LLM and AI-Assisted Tools

Bringing together advanced technology and design to boost engagement and performance. Betterworks, a leading performance management software company, is making further strides toward integrating GenAI capabilities responsibly and purposefully throughout the performance review management process with the launch of its private Large Language Model (LLM) to power its award-winning AI tools. Betterworks AI is intentionally […]

Podcast: The Batch 7/31/2024 Discussion

Here is an example of a wild new experimental feature in Google’s NotebookLM. The new Audio Overview feature can turn documents, slides, charts, and more into engaging two-party discussions with one click. Two AI hosts start up a lively “deep dive” discussion based on your sources. They summarize your material, make connections between topics, and banter back and forth. You can even download the conversation and take it on the go.

New MLPerf Inference v4.1 Benchmark Results Highlight Rapid Hardware and Software Innovations in Generative AI Systems

Today, MLCommons® announced new results for its industry-standard MLPerf® Inference v4.1 benchmark suite, which delivers machine learning (ML) system performance benchmarking in an architecture-neutral, representative, and reproducible manner. This release includes first-time results for a new benchmark based on a mixture of experts (MoE) model architecture. It also presents new findings on power consumption related to inference execution.
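The new benchmark is built around a mixture-of-experts model, an architecture the announcement names but does not unpack: a gating network routes each token to a small subset of expert sub-networks, so only a fraction of the model's parameters run per token. The toy NumPy sketch below illustrates that routing idea only; it is not the MLPerf workload, and all sizes and names are made up for illustration.

```python
# Toy mixture-of-experts routing sketch (illustrative only, not the MLPerf model).
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 4, 2

# Each "expert" is a small weight matrix; the gate scores experts per token.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts)) * 0.1


def moe_layer(x):
    """Route each token to its top-k experts and mix their outputs by gate weight."""
    logits = x @ gate_w                        # (tokens, n_experts) gating scores
    top = np.argsort(-logits, axis=1)[:, :top_k]
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = logits[t, top[t]]
        weights = np.exp(chosen - chosen.max())
        weights /= weights.sum()               # softmax over the selected experts only
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ experts[e])  # only k of n experts run per token
    return out


tokens = rng.standard_normal((3, d_model))
print(moe_layer(tokens).shape)  # (3, 16): same output shape, sparse expert compute
```

Because only the selected experts execute per token, MoE inference stresses hardware and software differently than dense models, which is the reason a dedicated benchmark is useful.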