Welcome to the Generative AI Report round-up feature here on insideAI News with a special focus on all the new applications and integrations tied to generative AI technologies. We’ve been receiving so many cool news items relating to applications and deployments centered on large language models (LLMs), we thought it would be a timely service for readers to start a new channel along these lines. An LLM fine-tuned on proprietary data becomes an AI application, and that is what these innovative companies are creating. The field of AI is accelerating at such a fast rate that we want to help our loyal global audience keep pace.
Nexusflow Unveils Open-source Generative AI That Empowers Copilots to Use Tools and Outperforms GPT-4
Nexusflow, a generative AI leader targeting the cybersecurity industry, announced the release of NexusRaven-V2, a 13-billion parameter open-source generative AI model that delivers function calling capability—meaning it can understand human instructions and translate the instructions into precise function/API calls to use a variety of software tools. The function calling capability lies at the core of the OpenAI Assistants API, and serves as the key to enabling copilots and agents to use software tools. Instruction-tuned from the CodeLlama-13B model, NexusRaven-V2 achieves up to a 7% higher tool use success rate than the latest OpenAI GPT-4 model on human-curated general software tool use benchmarks.
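To make the function-calling pattern concrete, here is a minimal sketch of the host-side loop it implies: the model reads a natural-language instruction plus tool signatures and emits a concrete call, which the application parses and dispatches. The tool name, signature, and model output below are illustrative assumptions, not NexusRaven-V2’s actual interface.

```python
import ast

# A tool the copilot may invoke (name and signature are illustrative).
def block_ip(address: str, duration_minutes: int = 60) -> str:
    return f"blocked {address} for {duration_minutes}m"

TOOLS = {"block_ip": block_ip}

def execute_model_call(call_text: str) -> str:
    """Safely parse a model-emitted call such as block_ip(address='10.0.0.5')
    and dispatch it to the registered Python function."""
    node = ast.parse(call_text, mode="eval").body
    if not isinstance(node, ast.Call) or node.func.id not in TOOLS:
        raise ValueError(f"unknown tool call: {call_text}")
    args = [ast.literal_eval(a) for a in node.args]
    kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords}
    return TOOLS[node.func.id](*args, **kwargs)

# The string below stands in for a model's output for the instruction
# "Block traffic from 10.0.0.5 for two hours."
print(execute_model_call("block_ip(address='10.0.0.5', duration_minutes=120)"))
```

Parsing with `ast.literal_eval` rather than `eval` keeps the model’s output from executing arbitrary code, which matters when the caller is an LLM.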
Nexusflow’s data curation pipeline combines open datasets and Meta’s Llama ecosystem to curate massive high-quality data, which is used to train NexusRaven-V2. The data curation and training of NexusRaven-V2 do not involve any proprietary LLMs such as OpenAI’s GPT-4, which enables enterprise customers to completely own the model that is used to build copilots and agents. This is especially important for enterprise applications which require up-to-date information, quality, safety and in-depth domain customization. NexusRaven-V2, with its superior quality, has the potential to revolutionize workflow automation on complex software with a significantly smaller model size and higher accuracy.
“NexusRaven-V2 outperforms OpenAI’s GPT-4 model head-to-head for using software tools,” said Jiantao Jiao, CEO and co-founder of Nexusflow. “This validates Nexusflow’s technical capability to deliver enterprise solutions using open-source models.”
MongoDB Announces General Availability of New Capabilities to Power Next-Generation Applications
MongoDB, Inc. (NASDAQ: MDB) announced the general availability of MongoDB Atlas Vector Search and MongoDB Atlas Search Nodes to make it faster and easier for organizations to securely build, deploy, and scale next-generation applications at less cost. MongoDB Atlas Vector Search simplifies bringing generative AI and semantic search capabilities into real-time applications for highly engaging and customized end-user experiences using an organization’s operational data. MongoDB Atlas Search Nodes provide dedicated infrastructure for applications that use generative AI and relevance-based search to scale workloads independent of the database and manage high-throughput use cases with greater flexibility, performance, and efficiency. Together, these capabilities on MongoDB Atlas provide organizations with the required foundation to seamlessly build, deploy, and scale applications that take advantage of generative AI and robust search capabilities with greater operational efficiency and ease of use. To get started with MongoDB Atlas, visit mongodb.com/atlas.
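For a sense of what building on Atlas Vector Search looks like, here is a sketch of a `$vectorSearch` aggregation stage. The index name, field names, and embedding values are illustrative assumptions; a real query vector would come from an embedding model, and the pipeline would run against a live Atlas cluster.

```python
# Query embedding (normally produced by an embedding model).
query_embedding = [0.12, -0.34, 0.56]

pipeline = [
    {
        "$vectorSearch": {
            "index": "plot_embedding_index",  # assumed Atlas Vector Search index
            "path": "plot_embedding",         # field holding document vectors
            "queryVector": query_embedding,
            "numCandidates": 100,             # ANN candidates to consider
            "limit": 5,                       # top results to return
        }
    },
    {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
]

# Against a live cluster this would run as:
# results = db.movies.aggregate(pipeline)
print(pipeline[0]["$vectorSearch"]["limit"])
```

Tuning `numCandidates` relative to `limit` trades recall against latency in approximate nearest-neighbor search.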
“Customers of all sizes from startups to large enterprises around the world tell us they want to take advantage of generative AI and relevance-based search to build next-generation applications that reimagine how businesses find ways to deeply personalize engagement with their customers, drive increased efficiency through automation, and propel new product development. But these customers know that complexity is the enemy of speed, and the choice of a database is fundamental to ensuring not just the success of an application but also how fast it can be built, deployed, and continually updated with the flexibility and scale needed to meet shifting end-user demands,” said Sahir Azam, Chief Product Officer at MongoDB. “With the general availability of MongoDB Atlas Vector Search and MongoDB Atlas Search Nodes, we’re making it even easier for customers to use a unified, fully managed developer data platform to seamlessly build, deploy, and scale modern applications and provide end users with the types of personalized, AI-powered experiences that save them time and keep them engaged.”
A Dialogue with Luigi Einaudi Is Now Possible, Thanks to Artificial Intelligence
Fondazione Luigi Einaudi Onlus of Turin, Fondazione Compagnia di San Paolo, and Reply present “Pensiero Liberale, Dialogo Attuale,” a project that uses artificial intelligence to make the economic thought of one of the most relevant personalities of the 20th century accessible to all through a conversation with a digital version of him.
Using the potential of generative artificial intelligence and the most advanced technologies in hyperrealistic 3D, we have created a Digital Human that not only mirrors Luigi Einaudi’s appearance, but has the ability to answer the interlocutor’s questions in a manner consistent with the historical figure’s thought. This overcomes all geographical, physical and generational barriers.
The digital representation of Luigi Einaudi is designed to be made available on the Fondazione Einaudi website and accessible from any device. Through keyboard or voice input, students, specialists or anyone interested can start a conversation on key topics that are related to the former President of the Italian Republic’s economic thought: monopoly, competition, monetary and fiscal policy, market, banking, inflation, as well as his biography.
“Artificial intelligence,” said Tatiana Rizzante, CEO of Reply, “is rapidly permeating every aspect of our society, opening the door to new opportunities. Knowledge management is one of the areas to which Reply is paying particular attention. Managing knowledge with artificial intelligence means not only transforming the way data is accessed and information is extracted but also rethinking decision-making processes and the way people work. To support this change, we have conceived and developed MLFRAME Reply, a framework that integrates a proprietary methodology for database analysis with tools for creating conversational generative models applicable to specific domains of knowledge. This same framework today represents the intelligence component of Luigi Einaudi’s digital human, to which our team working on 3D real-time technologies has given a face and an image, this in essence being the creation of hyper-realistic digital humans. This synergy of skills and technologies has allowed us to extend the access to knowledge to a wider audience, creating an engaging connection between culture and people.”
Markets EQ to Fuse Voice Tones with Language in Generative AI Platform for Corporate Communications
Markets EQ introduced a state-of-the-art AI tonal recognition technology for evaluating and actively enhancing executive communications. The announcement, which was made from the floor of IR Magazine’s AI in IR Forum, represents the first-ever platform to make advanced tonal recognition analysis available to IR professionals and investors. As such, Markets EQ provides unprecedented insights into management teams’ emotional and psychological states during earnings calls and other speaking engagements.
For instance, a recent review of Sam Altman’s keynote speech at OpenAI’s Dev Day presentation on November 6, 2023, showed that in portions of that talk Altman assumed a tone interpreted as fearful by Markets EQ. The moment occurred when Altman was on stage with Microsoft CEO Satya Nadella in what the press reported was an awkward exchange between the two, and it preceded nearly two weeks of chaos in which Altman was terminated and then reinstated as CEO. While no deceptive behavior was detected, other unexpected emotions, including fear and disgust, were detected.
“Markets EQ represents a significant leap in the realm of investor relations,” says Sean Austin, CEO of Markets EQ. “This tool doesn’t just analyze communications; it delves into the subtleties of executive speech, revealing deeper layers of meaning and intent. For the first time, IROs and investors can gauge not just what is being said, but the underlying confidence and certainty behind these statements, ushering in a new era of transparency and sophistication in corporate communication.”
Baresquare Empowers Online Retailers With Free Custom GPT Tool For Product Performance Optimization
Baresquare, an AI-powered analytics platform, unveiled the ‘eCom Product Analyst,’ a custom GPT tailored to elevate e-commerce product performance analysis. This timely release, coinciding with the crucial holiday shopping season, offers a free tool that delivers daily insights and empowers data-driven decisions, enabling retailers to quickly and effortlessly capitalize on emerging trends and optimize marketing campaigns.
“The holiday season is a pivotal time for online retailers, and while they are inundated with multiple campaigns, the sheer volume of data can be overwhelming. Baresquare’s eCom Product Analyst removes that stress by providing e-commerce managers with immediate action to optimize product performance,” said Georgios Grigoriadis, CEO of Baresquare. “The eCom Product Analyst exemplifies Baresquare’s commitment to delivering actionable insights in clear, concise language, helping ecommerce brands identify and rectify areas for improvement.”
Hitachi Vantara Introduces Pentaho+, A Simplified Platform For Trusted, GenAI-ready Data
Hitachi Vantara, the data storage, infrastructure, and hybrid cloud management subsidiary of Hitachi, Ltd. (TSE: 6501), announced Pentaho+, an integrated platform from the Pentaho software business designed to help organizations connect, enrich, and transform operations with refined, reliable data necessary for AI and Generative AI (GenAI) accuracy. Automating the work of complex data management with powerful self-service and cloud-agnostic solutions, Pentaho+ helps improve data quality by allowing organizations to effectively oversee data from inception to deployment.
“We believe in this age of AI and Machine Learning that if an enterprise isn’t data-fit, it will lose to one who is,” said Maggie Laird, global head of Pentaho Software at Hitachi Vantara. “With Pentaho+, we’re providing the groundwork for universal data intelligence, enabling leaders to provide clean, accurate data with certainty so they can leverage GenAI in ways that can truly impact their business strategy and bottom line.”
Akool’s Generative AI Platform Sets New Standard for Brand Engagement
Akool, the breakthrough Generative AI platform for personalized visual marketing and advertising, has launched, setting an industry standard for AI-driven personalization. Designed for forward-thinking global brands and innovative marketing creators, Akool fosters brand loyalty, captivates audiences, and significantly increases return on investment (ROI) through immersive brand experiences.
With Akool, marketers and advertisers can create brand and ad campaigns that allow customers to test drive a new car in their neighborhood, hang out with their favorite celebrity, try on holiday makeup or clothing or interact with personalized digital avatars. These are just a few examples of how brands can create immersive marketing campaigns and brand experiences to delight and engage customers.
“In the fast-paced beauty industry, it’s crucial for brands to stand out. With Akool’s Generative AI platform we are using it to develop our holiday hair campaign and the technology has really blown us away! We are looking to Akool to deeply connect with our consumers, taking them on a personalized journey to showcase the transformative power of VOLOOM,” said Patty Lund, founder and CEO of VOLOOM. “What Akool has to offer is redefining the boundaries of advertising for us and will heighten engagement and response from our community of users and influencers alike. To see Akool in action is nothing short of spectacular. It really brings the wow factor to consumers. As a brand, we’re always on the lookout for innovative ways to resonate with our audience, and Akool has truly set the bar high for immersive experience marketing.”
Matillion Adds AI Power to Pipelines with Amazon Bedrock
Data productivity provider Matillion announced the addition of generative artificial intelligence (AI) functionality to its flagship Data Productivity Cloud using Amazon Bedrock. Amazon Bedrock is a fully managed service from Amazon Web Services (AWS) that makes foundation models (FMs) from leading AI companies accessible via an API to build and scale generative AI applications.
A longtime AWS Advanced Tier Services Partner, Matillion adds generative AI integration with a prompt component supporting Amazon Bedrock, enabling users to operationalise the use of Large Language Models (LLMs) inside the data pipeline and address intelligent data integration tasks such as data enrichment, data quality, and data classification.
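As a sketch of what invoking a foundation model through Amazon Bedrock from a pipeline step involves, the helper below assembles an `invoke_model` request for an Anthropic model. The model ID, prompt, and classification task are illustrative assumptions, and a real call requires AWS credentials and the `boto3` SDK.

```python
import json

def build_bedrock_request(prompt: str, max_tokens: int = 200) -> dict:
    """Assemble invoke_model arguments for an Anthropic model on Bedrock
    (uses the legacy Claude text-completion body format)."""
    return {
        "modelId": "anthropic.claude-v2",
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "max_tokens_to_sample": max_tokens,
        }),
    }

request = build_bedrock_request("Classify this record: 'ACME Corp, NY, retail'")

# With credentials configured, a pipeline step would then call:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(**request)
print(request["modelId"])
```

Keeping the request construction separate from the network call makes the prompt logic easy to test inside a data pipeline.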
Ciaran Dynes, Chief Product Officer at Matillion, said: “Integrating AI technologies into our product offering adds a huge amount of firepower for our clients to materially increase their data productivity with intelligent data integration tasks. There was never any question about including AWS in that process. Amazon Bedrock’s ability to leverage all base models – such as Anthropic Claude and Amazon Titan – brings huge opportunities for our clients to get insights from their data at scale, faster.”
Astronomer Accelerates AI Workflows with Integrations for Top LLM Providers
Astronomer, a leader in modern data orchestration, announced a new set of Apache Airflow™ integrations to accelerate LLMOps (large language model operations) and support AI use cases. Modern, data-first organizations are now able to connect to the most widely-used LLM services and vector databases with integrations across the AI ecosystem, including OpenAI, Cohere, pgvector, Pinecone, OpenSearch, and Weaviate.
By enabling data-centric teams to more easily integrate data pipelines and data processing with machine learning (ML) workflows, organizations can streamline the development of operational AI. Astro provides critical data-driven orchestration for these leading vector databases and natural language processing (NLP) solutions, driving the MLOps and LLMOps strategies behind the latest generative AI applications.
DataOps is at the center of all ML operations and is driving forward generative AI and LLM production. As the de facto standard for DataOps, Airflow is the foundation for all data architectures and is already widely used in the construction of LLMs and by thousands of ML teams. With pluggable compute and thousands of integrations in the data science toolkit, Astro (the fully managed Airflow service from Astronomer) is the ideal environment for building and driving ML initiatives.
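A toy sketch of the embed-and-load flow these integrations orchestrate is shown below. In production, each function would be an Airflow task built on the provider packages for OpenAI, Pinecone, Weaviate, and the like; the function names and stub logic here are illustrative assumptions.

```python
def extract_docs() -> list[str]:
    """Pull raw text from a source system (stubbed)."""
    return ["incident report alpha", "release notes beta"]

def embed_docs(docs: list[str]) -> list[list[float]]:
    """Stand-in for an embedding-model call (e.g. via an OpenAI provider task)."""
    return [[float(len(d)), float(d.count(" "))] for d in docs]

def load_vectors(docs: list[str], vectors: list[list[float]]) -> dict:
    """Stand-in for an upsert into a vector database such as Pinecone."""
    return {doc: vec for doc, vec in zip(docs, vectors)}

docs = extract_docs()
index = load_vectors(docs, embed_docs(docs))
print(len(index))
```

Orchestration adds value precisely here: each step can be retried, scheduled, and monitored independently rather than run as one opaque script.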
“Organizations today are already relying on Astro and Airflow to harness the data required to fuel LLMs and AI. With these new integrations, we are now helping organizations realize the full potential of AI and natural language processing, and optimize their machine learning workflows,” said Steven Hillion, SVP of Data & AI at Astronomer. “These integrations put Astro at the foundation of any AI strategy, to better process complex and distributed volumes of data with the open source and proprietary frameworks that drive the current generative AI ecosystem.”
KX Launches KDB.AI Server Edition for Enterprise-Scale Generative AI
KX, a global leader in vector and time-series data management, announced the general availability of KDB.AI Server Edition, a highly performant, scalable vector database for time-oriented generative AI and contextual search. Deployable in a single container via Docker, KDB.AI Server offers a smooth setup for various environments, including cloud, on-premises, and hybrid systems, allowing businesses to quickly adopt and use its AI capabilities without complex setup processes.
Generative AI promises to fundamentally transform productivity and drive competitive differentiation. Yet as a recent Accenture report shows, while 84% of global C-suite executives believe they must leverage AI to achieve their growth objectives, 76% report they struggle with how to scale it. KDB.AI Server addresses this problem, giving enterprises the ability to supercharge their AI applications with unparalleled data processing and search functionality that scales to meet the needs of the largest, most complex enterprises.
Built to handle high-speed, time-oriented data and multi-modal data processing, KDB.AI Server seamlessly handles both structured and unstructured enterprise data, enabling holistic search across all data assets with better accuracy and lower cost. Unique among vector databases, KDB.AI enables developers to bring temporal and semantic context and relevancy to their AI-powered applications, giving them a comprehensive data search tool with unequaled flexibility.
Moreover, KDB.AI Server is optimized for Retrieval Augmented Generation (RAG) patterns, which means that rather than continuously training or fine-tuning large language models (LLMs), developers can bring data relevancy to their prompts, delivering better accuracy, lower cost, and less need for GPUs.
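The RAG pattern described above can be sketched compactly: retrieve the stored passages most relevant to a query, then prepend them to the LLM prompt instead of fine-tuning the model. The documents, toy embeddings, and similarity logic below are illustrative stand-ins; a real system would use an embedding model and a vector database such as KDB.AI for retrieval.

```python
import math

DOCS = {
    "q3": "Q3 revenue grew 12% year over year.",
    "hq": "The company headquarters moved to Austin in 2022.",
}
# Toy 2-d embeddings; real ones come from an embedding model.
EMBEDDINGS = {"q3": [1.0, 0.0], "hq": [0.0, 1.0]}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec: list[float], k: int = 1) -> list[str]:
    ranked = sorted(EMBEDDINGS, key=lambda d: cosine(query_vec, EMBEDDINGS[d]),
                    reverse=True)
    return [DOCS[d] for d in ranked[:k]]

def build_prompt(question: str, query_vec: list[float]) -> str:
    context = "\n".join(retrieve(query_vec))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt("How did revenue change in Q3?", [0.9, 0.1])
print(prompt)
```

Because the model answers from retrieved context, the knowledge base can be updated without retraining, which is the cost and freshness advantage RAG offers.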
Ashok Reddy, CEO, KX: “The debut of KDB.AI Server Edition marks a transformative step in enterprise AI. It’s tailored for a future where data is a strategic powerhouse, enabling businesses to create unique, custom AI solutions from their proprietary data to forge a distinct competitive edge. Blending unparalleled data processing with agility and privacy, KDB.AI Server Edition isn’t just a new product, it’s a leap into the generative AI era, ensuring businesses not only adapt but also thrive and lead in the rapidly evolving AI landscape.”
Caylent Launches MeteorAI to Supercharge GenAI Initiatives from Ideation to Implementation
Caylent, a leading cloud services company helping businesses build competitive IP, announced the launch of MeteorAI by Caylent. Built on Amazon Web Services (AWS), MeteorAI is a proprietary generative artificial intelligence (AI) framework that amplifies the power of company data and expedites the experimentation cycle of generative AI solutions to help organizations accelerate the launch and implementation of new technologies that deliver actual business value.
MeteorAI redefines how customers build bespoke internal and consumer-facing generative AI applications with Caylent’s AWS Data and Cloud Native development teams. It is the culmination of Caylent’s platform, process, and practice expertise in building AI solutions. Within the framework, models and prompts are custom-tuned for each company’s specific business needs using their data. MeteorAI enables customized secure data integrations that continuously ingest, transform, process, and act on data in real time, supporting individual employee inferences and programmatic back-office integrations. While achieving a custom, useful enterprise generative AI solution can require lengthy experimentation and optimization cycles, MeteorAI expedites the time to business value with battle-proven prompt templates for popular use cases, an extensible and modular backend, pre-configured monitoring, a library of pre-configured third-party data sources and integrations, and deep integration with popular authentication and privacy solutions. Finally, a feedback and alignment mechanism helps customers continuously improve the underlying generations.
“As organizations are eager to operationalize the promise of generative AI and translate its potential into real business value, Caylent’s MeteorAI stands as a testament to what’s achievable in generative AI application development,” said Valerie Henderson, President and Chief Revenue Officer, Caylent. “By integrating the strengths of the AWS ecosystem into our innovative approach to AI, Caylent offers organizations the ability to transition from conception to completion at an unprecedented pace. MeteorAI supports numerous use cases, including the development of AI assistants, enterprise knowledge bases, forecasting, recommendation engines, anomaly detection, pattern recognition, and data generation, among many others. Caylent uses MeteorAI internally to power our operations and it has dramatically improved our efficiency. We are so excited to bring this power to our customers and turn their IT into IP that will power the next era of their business growth.”
Qlik Expands Customers’ Ability to Scale AI for Impact with AWS
Qlik® is helping its customers embrace and scale the power of large language models (LLMs) and generative artificial intelligence (AI) with Amazon Web Services (AWS) through new integrations and AI-powered solutions. With its integration with Amazon Bedrock, Qlik Cloud® users can now easily leverage natural language to create new AI-driven insights on AWS with trusted and governed LLMs from providers such as AI21 Labs, Anthropic, Cohere, and Meta. Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI companies accessible via an API to build and scale generative AI applications. This integration builds on Qlik’s portfolio of native, engine-level integrations with Amazon SageMaker, Amazon SageMaker Autopilot, and Amazon Comprehend, which are already enabling customers to leverage AI and machine learning (ML) in prediction and model building efforts.
In addition to the new Amazon Bedrock integration, customers like HARMAN and Schneider Electric are benefitting from combining AWS and solutions from Qlik Staige™. Qlik Staige is the company’s holistic set of solutions that help organizations build a trusted data foundation for AI, leverage modern AI-enhanced analytics, and deploy AI for advanced use cases.
“AWS customers are looking at LLMs and generative AI to capture new levels of innovation and productivity in managing data and analytics, and Qlik is focused on delivering such innovations by integrating Qlik Cloud with AWS,” said Itamar Ankorion, SVP Technology Alliances at Qlik. “Adding an integration with Amazon Bedrock further extends our work with AWS across our whole Qlik Staige portfolio, and our continued commitment to future integrations shows customers that they can seamlessly leverage Qlik alongside AWS to drive AI success.”
Neo4j Signs Strategic Collaboration Agreement with AWS to Enhance Generative AI Results While Addressing AI Hallucinations
Neo4j®, a leading graph database and analytics company, announced a multi-year Strategic Collaboration Agreement (SCA) with Amazon Web Services (AWS) that enables enterprises to achieve better generative artificial intelligence (AI) outcomes through a unique combination of knowledge graphs and native vector search that reduces generative AI hallucinations while making results more accurate, transparent, and explainable. This helps solve a common problem for developers who need long-term memory for large language models (LLMs) that is grounded in their specific enterprise data and domains.
Neo4j also announced the general availability of Neo4j Aura Professional, the company’s fully managed graph database offering, in AWS Marketplace, enabling a frictionless, fast-start experience for developers on generative AI. AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS.
Neo4j is a leading graph database with native vector search that captures both explicit and implicit relationships and patterns. Neo4j is also used to create knowledge graphs, enabling AI systems to reason, infer, and retrieve relevant information effectively. These capabilities enable Neo4j to serve as an enterprise database for grounding LLMs while serving as long-term memory for more accurate, explainable, and transparent outcomes for LLMs and other generative AI systems.
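A sketch of how vector search and graph context combine in a query is shown below. The index name, labels, relationship type, and embedding are illustrative assumptions; the `db.index.vector.queryNodes` procedure requires a Neo4j 5.x instance with a vector index already created.

```python
# Cypher combining vector similarity with graph traversal: find the most
# similar text chunks, then walk the graph for the entities they mention.
CYPHER = """
CALL db.index.vector.queryNodes('chunk_embeddings', 5, $query_embedding)
YIELD node, score
MATCH (node)-[:MENTIONS]->(entity:Entity)
RETURN node.text AS passage, score, collect(entity.name) AS related_entities
"""

# With the official driver, this would run as:
# from neo4j import GraphDatabase
# driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "..."))
# records = driver.execute_query(CYPHER, query_embedding=[...])
print("db.index.vector.queryNodes" in CYPHER)
```

The follow-up `MATCH` is what distinguishes this from plain vector search: retrieved passages arrive with their explicit graph relationships, which is the grounding the announcement describes.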
Sudhir Hasbe, Chief Product Officer, Neo4j, said: “Neo4j has been an AWS Partner since 2013 – with this latest collaboration representing an essential union of graph technology and cloud computing excellence in a new era of AI. Together, we empower enterprises seeking to leverage generative AI to better innovate, provide the best outcome for their customers, and unlock the true power of their connected data at unprecedented speed.”
Demostack Unveils AI Data Generator for Effortless and Scalable Customized Demos
Demostack, the place for all things demo, unveiled its AI Data Generator today. This is a significant advancement for demo managers, enabling the creation of customized demos effortlessly and at scale. Now customer-facing teams can deliver professionally tailored demos on every call to drive sales faster down the funnel and win more deals.
With generative AI embedded into Demostack, demo managers can quickly tailor demos to specific segments, personas, and industries. The AI Data Generator populates new products with realistic data, replaces personally identifiable information (PII), and replaces dummy data with smart data.
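As an illustration of the data-masking step described above, the sketch below swaps personally identifiable values in a demo record for generated stand-ins. The field names, replacement pool, and logic are assumptions for illustration, not Demostack’s actual implementation.

```python
import random
import re

# Pool of realistic stand-in names for masked records (illustrative).
FAKE_NAMES = ["Avery Chen", "Jordan Smith", "Riley Patel"]

def mask_record(record: dict) -> dict:
    """Return a copy of a demo record with PII fields replaced."""
    masked = dict(record)
    masked["name"] = random.choice(FAKE_NAMES)
    # Replace the local part of the email, keeping the domain intact.
    masked["email"] = re.sub(r"^[^@]+", "user" + str(random.randint(100, 999)),
                             record["email"])
    return masked

demo_row = {"name": "Jane Doe", "email": "jane.doe@example.com", "plan": "Pro"}
print(mask_record(demo_row)["plan"])
```

Non-sensitive fields pass through unchanged, so the demo still reflects the real product data shape.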
Jonathan Friedman, co-founder and CEO of Demostack, emphasizes the importance of relatability in demos. “A perfect demo acts as a mirror, reflecting the customer in the product. Our GenAI-driven editing features enable our customers to craft perfect demos for every customer type. In a highly competitive environment, Demostack ensures that demos are professional, predictable, and relevant,” said Friedman.
Vultr and Together AI Partner to Scale GenAI at the Edge
Vultr, the privately-held cloud computing platform, and Together AI, the platform as a service (PaaS) provider unlocking the power of open source large language models (LLMs), have partnered to enable generative AI at scale. Together AI will be leveraging Vultr’s global array of cloud GPUs – which include the NVIDIA GH200 Grace Hopper™ Superchip, HGX H100, A100 Tensor Core GPU, L40S, A40, and A16 – as the compute backbone to enable worldwide inference at the edge.
The momentum that companies like Together AI and Vultr are experiencing reflects the sonic boom unleashed one year ago when OpenAI introduced the world to ChatGPT. The events of the past 12 months have demonstrated to the world the art of the possible. And now, with Together AI providing the platform and Vultr delivering the composable infrastructure, the companies are opening the doors for innovators around the world to leverage generative AI in ways that will fundamentally reinvent businesses.
dotData announces dotData Insight to ideate business hypotheses by combining AI-driven insight discovery and Generative AI
dotData, a pioneer and leading provider of platforms for feature discovery, announced the launch of dotData Insight. The new platform uses an AI-driven insight discovery engine, augmented with Generative AI, to enable enterprises to uncover unique business hypotheses from data for better analytics-driven corporate decisions.
dotData Insight combines two complementary AI technologies: dotData’s AI-driven signal discovery engine from Feature Factory and Generative AI. The former uses dotData’s proprietary AI to automatically discover and evaluate data signals and statistical facts hidden within the vast repositories of enterprise data. The latter helps users translate discovered signals into hypotheses, making interpretation easier. Together, this two-fold approach helps business analysts—in departments like marketing and finance—eliminate blind spots that arise in manual reporting and analysis, and removes the arduous task of converting convoluted enterprise data into business signals.
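The two-stage idea can be sketched in miniature: first surface a statistical signal from raw data automatically, then have a generative model restate it as a readable business hypothesis. The data, metric, and template below are illustrative assumptions; the template stands in for the LLM phrasing step.

```python
import statistics

# Toy metric: average support tickets per month for two account cohorts.
churned = [4.1, 3.8, 4.5]
retained = [1.2, 0.9, 1.5]

def discover_signal() -> dict:
    """Stage 1: automatically compute a candidate signal from the data."""
    lift = statistics.mean(churned) / statistics.mean(retained)
    return {"feature": "support_tickets_per_month", "lift": round(lift, 1)}

def phrase_hypothesis(signal: dict) -> str:
    """Stage 2: restate the signal in plain language (a real system
    would send it to an LLM for natural phrasing)."""
    return (f"Accounts that churn file about {signal['lift']}x more "
            f"{signal['feature'].replace('_', ' ')} than retained accounts.")

print(phrase_hypothesis(discover_signal()))
```

Splitting discovery from phrasing keeps the statistics verifiable even when the wording is generated.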
“dotData Insight expands dotData’s vision to propel data-driven digital transformation for all enterprises and accelerate the time-to-value for business leaders,” said Ryohei Fujimaki, Ph.D., founder and CEO of dotData.
Sign up for the free insideAI News newsletter.
Join us on Twitter: https://twitter.c
Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/
Join us on Facebook: https://www.facebook.com/insideAI NewsNOW