Is Generative AI Having Its Oppenheimer Moment?

In this contributed article, Manuel Sanchez, Information Security and Compliance Specialist at iManage, discusses how Generative AI – which burst into the mainstream a little over a year ago – seems to be having an Oppenheimer moment of its own.

Big AIs in Small Devices

In this contributed article, Luc Andrea, Engineering Director at Multiverse Computing, discusses the challenge of integrating increasingly complex AI systems, particularly Large Language Models, into resource-limited edge devices in the IoT era. He proposes quantum-inspired algorithms and tensor networks as potential solutions for compressing these large AI models, making them suitable for edge computing without compromising performance.
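Tensor-network compression can be viewed as a generalization of low-rank matrix factorization. As a rough illustration of the underlying idea only (not Multiverse Computing's actual method), a truncated SVD can compress a weight matrix that has approximate low-rank structure:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic 512x512 "weight matrix" with approximately rank-32 structure
W = rng.normal(size=(512, 32)) @ rng.normal(size=(32, 512))
W += 0.01 * rng.normal(size=W.shape)  # small noise on top of the low-rank signal

# Truncated SVD: keep only the r largest singular values/vectors
U, S, Vt = np.linalg.svd(W, full_matrices=False)
r = 32
W_hat = (U[:, :r] * S[:r]) @ Vt[:r]   # rank-r reconstruction

orig_params = W.size                              # 512 * 512
compressed_params = U[:, :r].size + r + Vt[:r].size
rel_err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
```

Here the factors store far fewer parameters than the original matrix while reconstructing it almost exactly; tensor-network methods extend this factorization idea to higher-order tensors.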

The Importance of Protecting AI Models

In this contributed article, Rick Echevarria, Vice President, Security Center of Excellence, Intel, touches on the growing importance of protecting AI models and the data they contain, as this data is often sensitive, private, or regulated. Leaving AI models and their training data sets unmanaged, unmonitored, and unprotected can put an organization at significant risk of data theft, fines, and more. Additionally, poorly managed data practices could result in costly compliance violations or a data breach that must be disclosed to customers.

Why Integration Data is Critical for Powering SaaS Platforms’ AI Features

In this contributed article, Gil Feig, co-founder and CTO of Merge, discusses how integration data can support AI features and why, without robust product integrations, successful AI companies would not exist.

Rockets: A Good Analogy for AI Language Models

In this contributed article, Varun Singh, President and co-founder of Moveworks, sees rockets as a fitting analogy for AI language models. While the core engines impress, he explains the critical role of Vernier Thrusters in providing stability for the larger engine. Likewise, large language models need the addition of smaller, specialized models to enable oversight and real-world grounding. With the right thrusters in place, enterprises can steer high-powered language models in the right direction.

Unveiling Jamba: AI21’s Groundbreaking Hybrid SSM-Transformer Open-Source Model

AI21, a leader in AI systems for the enterprise, unveiled Jamba, a production-grade Mamba-style model that integrates Mamba Structured State Space model (SSM) technology with elements of traditional Transformer architecture. Jamba marks a significant advancement in large language model (LLM) development, offering unparalleled efficiency, throughput, and performance.

Heard on the Street – 4/25/2024

Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace.

Nature Communications Publishes Zapata AI Research on Generative AI for Optimization

Zapata Computing Holdings Inc. (Nasdaq: ZPTA), the Industrial Generative AI company, announced that its foundational research on Generator-Enhanced Optimization (GEO) has been published in the esteemed Nature Communications journal. The research, titled “Enhancing Combinatorial Optimization with Classical and Quantum Generative Models,” introduces GEO, a novel optimization method that leverages the power of generative modeling to suggest high-quality candidate solutions to complex optimization problems.
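The generative-model-in-the-loop idea behind GEO can be illustrated very loosely with a classical, cross-entropy-method-style toy loop: sample candidate solutions from a simple generative model, keep the lowest-cost ones, and refit the model on them. This is a simplified stand-in for the concept, not Zapata's published algorithm:

```python
import numpy as np

def generative_optimize(cost, dim=10, pop=200, elite=20, iters=50, seed=0):
    """Sample candidates from a Bernoulli 'generative model', keep the
    lowest-cost elites, and refit the model on them each iteration."""
    rng = np.random.default_rng(seed)
    p = np.full(dim, 0.5)                              # model parameters
    for _ in range(iters):
        X = (rng.random((pop, dim)) < p).astype(int)   # generate candidates
        costs = np.array([cost(x) for x in X])
        elites = X[np.argsort(costs)[:elite]]          # best solutions seen
        p = 0.7 * p + 0.3 * elites.mean(axis=0)        # refit toward elites
    return (p > 0.5).astype(int)

# Toy combinatorial problem: minimize Hamming distance to a hidden bitstring
target = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
sol = generative_optimize(lambda x: int(np.abs(x - target).sum()))
```

The loop steadily biases the model toward low-cost regions of the search space; GEO replaces the simple Bernoulli model here with classical or quantum generative models.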

Video Highlights: Gemini Ultra — How to Release an AI Product for Billions of Users — with Google’s Lisa Cohen

In this video presentation, our good friend Jon Krohn, Co-Founder and Chief Data Scientist at the machine learning company Nebula, is joined by Lisa Cohen, Google’s Director of Data Science and Engineering, to discuss the launch of Gemini Ultra. Discover the capabilities of this cutting-edge large language model and how it stands toe-to-toe with GPT-4.

What Happens When We Train AI on AI-Generated Data?

In this contributed article, Ranjeeta Bhattacharya, senior data scientist within the AI Hub wing of BNY Mellon, points out that in the world of AI and LLMs, finding appropriate training data is the core requirement for building generative solutions. As the capabilities of Generative AI models like ChatGPT and DALL-E continue to grow, there is an increasing temptation to use their AI-generated outputs as training data for new AI systems. However, recent research has shown the dangerous effects of doing this, leading to a phenomenon called “model collapse.”
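The intuition behind model collapse can be demonstrated with a toy experiment: repeatedly fit a distribution to samples drawn from the previous generation's fitted distribution and watch its diversity shrink. This is a minimal illustration of the feedback loop, not the setup used in the cited research:

```python
import numpy as np

def self_training_generations(n_samples=50, n_generations=300, seed=0):
    """Each 'generation' fits a Gaussian to data sampled from the previous
    generation's fit, mimicking training on AI-generated outputs."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the real data distribution
    stds = [sigma]
    for _ in range(n_generations):
        synthetic = rng.normal(mu, sigma, size=n_samples)  # "AI-generated" data
        mu, sigma = synthetic.mean(), synthetic.std()      # refit on it alone
        stds.append(sigma)
    return stds

stds = self_training_generations()
```

Because each refit slightly underestimates the spread and errors compound, the fitted standard deviation drifts toward zero over generations: the model progressively forgets the tails of the original distribution, which is the essence of model collapse.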