insideAI News Latest News – 6/6/2023

In this regular column, we’ll bring you all the latest industry news centered around our main topics of focus: big data, data science, machine learning, AI, and deep learning. Our industry is constantly accelerating with new products and services being announced every day. Fortunately, we’re in close touch with vendors from this vast ecosystem, so we’re in a unique position to inform you about all that’s new and exciting. Our massive industry database is growing all the time, so stay tuned for the latest news items describing technology that may make you and your organization more competitive.

Cloudera Enables Trusted, Secure and Responsible Artificial Intelligence at Scale

Rob Bearden, CEO of hybrid data company Cloudera, outlined a path for enterprises to benefit from secure, trusted and responsible AI at scale. A new ready-to-use blueprint for Large Language Models (LLMs) helps customers use generative AI based on their own data and their enterprise context with security and governance.

Cloudera enables customers to manage and unlock value from their data across private and public cloud environments. Cloudera’s open data lakehouse brings together the capabilities of a data warehouse and a data lake to power business intelligence, AI and machine learning (ML) solutions. At the same time, companies have the flexibility to deploy these solutions across the private and public cloud of their choice with an identical experience.

“This immense data under management places Cloudera in an unparalleled position to drive generative AI-based applications based on Open Data Lakehouse in an enterprise context,” said Rob Bearden. “Generative AI and Large Language Models are only as good as the data they’ve been trained on, and they need the right context. For these models and AI to be successful, it needs to be trusted. And trusting AI starts with trusting your data. The AI market is changing rapidly. The reality is that data and enterprise context will be the constant to success of any LLM or AI models. Cloudera has been helping enterprises gain value from AI and ML for years. We will continue to innovate and invest heavily in our entire product suite so that customers can benefit from trusted, secure and responsible AI-based applications.”

Grounded Generation from Vectara Defines a New Gold Standard for Generative AI Use for Business Data

Vectara, the Generative AI (GenAI) conversational search platform, has established itself as a leading player in GenAI with the release of an all-new “Grounded Generation” capability that all but eliminates hallucinations.  With this release, Vectara empowers developers to rapidly and easily build conversational AI products with world-class retrieval, summarization, and data privacy directly on top of the data that matters to their business. 
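Under the hood, grounded generation is a retrieval-augmented pattern: the system first retrieves the passages of a customer’s own data most relevant to a query, then constrains the model to answer only from those passages. A minimal sketch of the retrieval step, using a toy bag-of-words similarity (all names and scoring here are illustrative, not Vectara’s API):

```python
# Toy sketch of the retrieval step behind grounded generation: answers are
# later composed only from the retrieved passages, which is what keeps the
# model from hallucinating. Bag-of-words cosine stands in for Vectara's
# neural retrieval; everything here is illustrative.
from collections import Counter
import math

STOPWORDS = {"what", "is", "the", "a", "an", "to", "of", "on"}

def tokenize(text: str) -> Counter:
    return Counter(t for t in text.lower().split() if t not in STOPWORDS)

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    q = tokenize(query)
    return sorted(passages, key=lambda p: cosine(q, tokenize(p)), reverse=True)[:k]

passages = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
    "Refunds are issued to the original payment method.",
]
grounding = retrieve("what is the refund policy", passages)
# A generator would now be prompted to answer strictly from `grounding`.
```

Production systems use neural embeddings rather than word counts, but the contract is the same: the generator only ever sees data the retriever surfaced.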

“These new features and capabilities make Vectara’s neural retrieval platform among the best in the world,” said CEO and Cofounder Amr Awadallah. “The breakthroughs our team has accomplished over the last eight months are changing the face of AI and how companies can safely use it to expand and improve their value propositions. The new Summarization feature provides organizations with trustworthy ChatGPT-like generated answers, stacked on top of our Hybrid Search enhancements for superior search efficacy. This release makes Vectara one of the most advanced companies in the world in information retrieval and Grounded Generation.”

Domino Summer 2023 Release Delivers Enterprises Capabilities to Responsibly Build AI Breakthroughs at Lower Costs

Domino Data Lab, provider of a leading Enterprise MLOps platform trusted by over 20% of the Fortune 100, announced new MLOps capabilities in its Summer 2023 release to rapidly develop and deploy cutting-edge AI, including custom large language models and generative AI applications, responsibly at a fraction of the cost. With this expansion, Domino continues to democratize access to the latest AI ecosystem advances. Enterprises can now further fast-track cutting-edge in-house AI development using a new project hub, transparent AutoML and pre-trained foundation models in new enhanced SaaS offerings. Domino’s new Model Sentry helps them enforce responsible AI practices and eliminate AI development overspend.

With AI innovation at a premium, 46% of CDOs and CDAOs say they do not have the governance tools they need. Not surprisingly, even the most innovative enterprises struggle to securely and cost-effectively develop generative AI and large language models (LLMs) at the pace that leaders demand. Domino now addresses these concerns by enabling data science teams to rapidly deploy and manage custom models, streamline operations while adhering to responsible AI principles, and take cost-effective approaches to building and maintaining powerful models.

Latest Couchbase Capella Release Features New Developer Platform Integrations and Greater Enterprise Features

Couchbase, Inc. (NASDAQ: BASE), the cloud database platform company, announced a broad range of enhancements to its industry-leading Database-as-a-Service Couchbase Capella™. The newest release of Capella will be accessible from the popular developer platform Netlify and features a new Visual Studio Code (VS Code) extension, making it easier for developers and development teams to build modern applications on Capella, streamline their workflows and increase productivity. Coinciding with National Cloud Database Day, Couchbase is also extending its enterprise deployability and introducing new features, allowing customers to move more applications to Capella with a lower TCO.

“We continue to broaden the Capella capabilities and make it easier for new developers to come on board and take advantage of our industry-leading cloud database platform,” said Scott Anderson, SVP of product management and business operations at Couchbase. “Development teams can get started with Capella more quickly and do more with our cloud database platform, improving efficiency and productivity. And for operations teams, Capella becomes even easier to deploy and manage while broadened enterprise capabilities handle more workloads at a fraction of the cost compared to other document-based DBaaS offerings.”

Intelligent Data Extraction with NewgenONE OmniXtract 4.0

Newgen Software, a global provider of the digital transformation platform NewgenONE, has launched OmniXtract 4.0, an upgraded version of its intelligent document extraction software. The latest version of OmniXtract leverages AI/ML capabilities to deliver high extraction accuracy and offers an enhanced user interface. It also boasts an improved extraction engine and a microservices-based architecture.

“Today, the volume of documents continues to skyrocket, presenting significant hurdles for enterprises to seek crucial insights. The latest version of OmniXtract with AI/ML capabilities automates document-centric processes, enabling enterprises to streamline operations, increase efficiency, and accelerate digital initiatives,” said Mr. Varun Goswami, VP – Product Management, Newgen Software.

Qumulo’s New Software Enhancements Boost Customers’ Storage Efficiency as They Scale and Never Have to Migrate Again

Qumulo, the simple way to manage exabyte-scale data anywhere, announced new capabilities that eliminate the pain of data migration during hardware refreshes and boost customers’ storage efficiency as their clusters grow. While new storage technology is continuously being made available, the burden of migration often discourages customers from taking advantage of the latest and greatest innovations. Qumulo’s new Transparent Platform Refresh (TPR) feature lets customers effortlessly swap old appliances for new ones with no disruption to their end users and without having to undergo a time-consuming and expensive migration. The company’s new Adaptive Data Protection (ADP) feature enables customers to adjust their data protection configurations as their clusters grow, often reclaiming hundreds of terabytes of usable space as a result and optimizing their storage investment as IT budgets tighten.

On top of more efficient scaling and seamless hardware refreshes, Qumulo now runs on the highly performant HPE Alletra 4110 data storage server, helping customers address rapidly growing artificial intelligence (AI) and machine learning (ML) use cases and other performance-intensive workloads. As customers become more data-driven, performance requirements become more important, balancing speed, efficiency, security, and flexibility in the simplest way possible. With Qumulo on the HPE Alletra 4110 data storage server, customers can gain more insight from their data without the performance challenges and management overhead that often come with general-purpose storage solutions.

“Many organizations are facing economic uncertainty, and the flexibility to adjust their storage needs responsively is crucial,” said Ryan Farris, Vice President of Product, Qumulo. “With more platform options and capabilities to boost efficiency and simplify refreshes at scale, our customers can ensure their end-users and workloads are fully supported even in a turbulent macroeconomic environment.”

Leena AI unveils the future of work, powered by its proprietary LLM, WorkLM

Leena AI, the company revolutionizing enterprise employee experience, announced WorkLM, the company’s proprietary large language model (LLM) built especially for enterprise employee experience. WorkLM is poised to redefine how employees engage with work, delivering a transformative impact on productivity, efficiency and overall work satisfaction.

WorkLM harnesses Leena AI’s breakthrough language model architecture to provide an unparalleled predictive text generation capability. With its advanced ability to produce human-like responses in context, WorkLM is an indispensable tool for a wide range of tasks, from auto-completing emails to generating comprehensive reports and providing recommendations from existing enterprise-wide business performance data.

With an impressive learning capacity of 7 billion parameters, WorkLM is meticulously designed to understand complex enterprise data and generate intricate text. WorkLM’s substantial ‘brain capacity’ enables it to discern the finest nuances of language, consistently producing high-quality responses that adapt to diverse contexts. WorkLM empowers employees with a versatile toolset to accomplish tasks with exceptional precision and speed.

At the core of WorkLM lies its powerfully unique training dataset composed of 2TB of curated proprietary data acquired by Leena AI over the past 7 years. This imparts WorkLM with an unrivaled grasp of business language and contexts, enabling it to comprehend complex challenges. Leena AI ensures comprehensive enterprise data security and privacy protection, eliminating unintended machine learning access and exposure concerns. 

“Our unveiling of WorkLM represents a pivotal moment in the future of work,” stated Adit Jain, co-founder and CEO of Leena AI. “With WorkLM’s revolutionary capabilities, we proudly lead the charge in transforming the enterprise employee experience. WorkLM empowers enterprises to achieve unprecedented levels of productivity and efficiency. It is an exciting time as we set out to offer personalized, responsive solutions that will revolutionize the way employees engage with their work, driving remarkable business growth.”

Precisely Advances Leading Data Quality Portfolio, Providing Unparalleled Support to Customers on their Journey to Data Integrity

Precisely, a leader in data integrity, announced a series of innovations to its industry-recognized data quality portfolio. The announcement underscores the company’s continued commitment to helping organizations on their path to data integrity – empowering data leaders and practitioners to better understand their data and ensure it is accurate, consistent, and contextualized for confident decision-making.

“Advanced data programs ultimately rely on high-integrity data to achieve successful outcomes, and ensuring that your data is accurate, consistent, and contextualized is a critical step on the path to building that trust,” said Emily Washington, SVP – Product Management at Precisely. “We are proud to continue to evolve our unique blend of software, data, and strategic services to meet customers wherever they are on their data integrity journey and help them to stay agile in the dynamic market landscape.”

Instabase Launches New Suite of Generative AI Tools to Democratize Access to Content Understanding  

Instabase, a leader in applied AI for the enterprise, announced the launch of AI Hub, a repository of AI apps focused on content understanding and a set of generative AI-based tools. With Converse, one of the first apps in Instabase AI Hub, any individual can instantly hold interactive conversations, get answers to questions, and generate summaries from content such as documents, spreadsheets, and even images. From tax files to insurance claims to receipts, invoices, customer data, and more – AI Hub enables anyone to chat with their content and receive answers as if speaking to a knowledgeable expert on the material.

“We’re entering into a period of time that will be known for AI advancement and innovation,” said Anant Bhardwaj, the founder and CEO of Instabase. “AI Hub is a natural extension of the innovation that has always been core to Instabase. We’re excited that with these new advancements, users around the world can now leverage this technology for nearly any use case.”  

ArangoDB Boosts Performance and Usability Across Search, Graph, and Analytics with Release of ArangoDB 3.11

ArangoDB, the company behind the graph data and analytics platform, announced the GA release of ArangoDB 3.11 to accelerate its performance across search, graph, and analytics use cases. ArangoDB 3.11 includes performance improvements for ArangoSearch, ArangoDB’s natively-integrated full-text search and ranking engine, as well as new functionality to its web interface to simplify the database’s operations. 

“ArangoDB 3.11 is designed to take the capabilities of advanced search and analytics to new heights, while also introducing a wealth of new performance, usability, and operational improvements,” said Jörg Schad, PhD, CTO at ArangoDB. “With today’s release, ArangoDB runs even faster and is more intuitive to use, allowing our customers and community to continue to unlock insights that help them optimize decision-making and accelerate innovation.”

Exasol Unveils the No-Compromise Analytics Database Unlocking Greater Productivity, Cost-Savings, and Flexibility

Exasol unveiled its no-compromise analytics database, which delivers more productivity, savings, and flexibility for enterprises to better manage data in the cloud, SaaS, on-premises, or hybrid. With processing times up to 20 times faster than any other analytics database, Exasol provides an unmatched price/performance ratio, helping customers achieve 320% ROI in reduced licensing, implementation, maintenance, and training costs. Businesses interested in trying Exasol in their own tech stack with their own data can do so at no cost for a limited time through its Accelerator Program.

“Exasol believes customers shouldn’t ever have to make compromises with their analytics databases, especially during these times of economic uncertainty and reduced IT budgets. This is why our offering allows users to see significant performance and efficiency gains, while working within their budgets and existing tech environments,” said Joerg Tewes, CEO of Exasol. “We have hundreds of global customers using Exasol with extremely complex data, at scale. From financial services and retail customers reducing queries from hours to seconds, to agriculture firms working with complicated models supporting DNA sequencing, our customers spend more time analyzing and optimizing with less time and headcount.”

Predibase Empowers Any Engineer to Build Their Own GPT With Support for Large Language Models

Predibase, the low-code declarative ML platform for developers, announced the general availability of its platform, adding new features for large language models and introducing free trial editions.

Predibase makes the extremely powerful but proprietary declarative ML approaches adopted by companies like Uber, Apple and Meta available to a much wider audience. In production with Fortune 500 organizations and high-growth startups like Paradigm and Koble.ai, the proven Predibase platform enables developers and data scientists alike to quickly and easily build, iterate and deploy sophisticated AI applications without the need to learn how to use complex ML tools or assemble low-level ML frameworks. Teams simply define what they want to predict using Predibase’s cutting-edge large AI models and the platform does the rest. Novice users can leverage recommended model architectures, while expert users can finely tune any model parameter. As a result, Predibase cuts the time to deploy ML-powered applications from months to days. Since the platform came out of stealth, over 250 models have been trained on it.

“Every enterprise wants to gain a competitive edge by embedding ML into their internal and customer-facing applications. Unfortunately, today’s ML tools are too complex for engineering teams, and data science resources are stretched too thin, leaving the developers working on these projects holding the bag,” said Piero Molino, co-founder and CEO of Predibase. “Our mission is to make it dead simple for novices and experts alike to build ML applications and get them into production with just a few lines of code. And now we’re extending those capabilities to support building and deploying custom LLMs.”

Newest Genesys Generative AI Capabilities Boost Power of Experience Orchestration

Genesys® announced expanded generative AI capabilities for experience orchestration, helping organizations unlock deeper customer and operational insights using the power of Large Language Models (LLMs) as a force multiplier for employees. Now with auto-summarization for Agent Assist, the Genesys Cloud CX™ platform helps organizations drive increased quality, speed and accuracy by enabling employees to efficiently capture conversational intelligence from digital and voice interactions.

The latest generative AI addition to the platform deepens Genesys AI’s expansive predictive, conversational language processing and analytics capabilities. This provides a powerful foundation for organizations to continuously improve customer and employee experiences through smarter automation, personalization and optimization.

“We’ve long used large language models within Genesys AI to help organizations proactively orchestrate experiences that lead to stronger customer and employee outcomes,” said Olivier Jouve, chief product officer at Genesys. “Through responsible development that responds to our customers’ needs, we’re accelerating our pace of innovation with the latest generations of generative AI to help organizations gain greater value from their data, rapidly create new content and break language barriers. We’re also considering the roles and expertise we may need to fuel our R&D strategy for the future, like prompt engineering and curation.”

One AI Unveils BizGPT, Empowering Brands to Provide Users with Precise Responses Based Solely on their Content

One AI, a leading name in Generative AI for businesses, announced BizGPT, an innovative tool that equips businesses of all sizes with the ability to swiftly deploy a unique, intelligent conversational assistant. This groundbreaking functionality allows any product, service, or business to provide users with intuitive access to the precise information they need through a chat interface.

One AI’s service plunges into complex documents and content, processing unlimited inputs. It restricts its responses to the synced content, diplomatically declining unrelated requests. Connection to a company’s knowledge base is seamless through a One AI collection, enabling the AI to sift through and insert relevant information into interactions. BizGPT integrates with diverse content sources, including knowledge bases, GitHub repositories, multimedia content, websites, customer service chats, and a range of documents such as financial reports, salary slips, and invoices. The product ensures accurate, context-sensitive responses, providing source references to uphold transparency. This commitment to ‘alignment to source’ serves as a quality check, ensuring AI outputs remain accurate, prevent misinformation, and retain their relevance and applicability.

“Every Business Deserves Its Own GPT. We truly believe that each business should have the capability to quickly deploy their unique GPT for their users,” stated Amit Ben, founder & CEO of One AI. “BizGPT is designed to foster personalized and streamlined interactions, bridging the divide between users and the information they need.”

TruLens for LLM Applications Launches – Evaluate and Track Large Language Model Application Experiments

TruEra, which provides software to test, debug, and monitor ML models across the full MLOps lifecycle, launched TruLens for LLM Applications, the first open source testing software for apps built on Large Language Models (LLMs) like GPT. LLMs are emerging as a key technology that will power a multitude of apps in the near future – but there are also growing concerns about their use, with prominent news stories about LLM hallucinations, inaccuracies, toxicity, bias, safety, and potential for misuse.

“TruLens feedback functions score the output of an LLM application by analyzing generated text from an LLM-powered app and metadata,” explained Anupam Datta, Co-founder, President and Chief Scientist at TruEra. “By modeling this relationship, we can then programmatically apply it to scale up model evaluation.”
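The idea of a feedback function is easy to sketch: it is any callable that maps an app’s inputs and outputs to a quality score. A toy groundedness check under that definition (illustrative only, not TruLens’s actual scoring code) might measure how much of a generated answer is actually supported by the retrieved context:

```python
# Illustrative sketch of a "feedback function" in the TruLens sense: a
# callable that scores an LLM app's output. Here, a crude groundedness
# check measuring how much of the answer appears in the retrieved
# context. This is a toy; real implementations use an LLM or NLI model
# to judge support, not token overlap.
def groundedness(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the context (0..1)."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

context = "the invoice total was 42 dollars due on friday"
good = groundedness("invoice total was 42 dollars", context)  # fully supported
bad = groundedness("the refund ships monday", context)        # mostly unsupported
```

Because the function is just a callable returning a score, it can be applied programmatically across every experiment run, which is what makes evaluation scale.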

Alteryx Announces New Generative AI Capabilities to Supercharge Analytics Democratization 

Alteryx, Inc. (NYSE: AYX), the Analytics Cloud Platform company, announced Alteryx AiDIN, the industry’s first engine that combines the power of artificial intelligence (AI), machine learning (ML), and generative AI with the Alteryx Analytics Cloud Platform to accelerate analytics efficiency and productivity. Alteryx now brings the most advanced models and methods to more users across the organization, enabling anyone to capture the competitive advantage of AI and ML.

“With generative AI, users unlock an entirely new way of using insights to transform their business and solve their biggest challenges,” said Suresh Vittal, chief product officer at Alteryx. “With these game-changing Alteryx AiDIN capabilities, customers can intuitively infuse data-driven insights into every decision across every function, while maintaining governance over their analytics processes.”

Pega Announces Pega GenAI to Infuse Generative AI Capabilities in Pega Infinity ’23

Pegasystems Inc. (NASDAQ: PEGA), the low-code platform provider empowering the world’s leading enterprises to Build for Change®, announced Pega GenAI™ – a set of 20 new generative AI-powered boosters to be integrated across Pega Infinity™ ‘23, the latest version of Pega’s product suite built on its low-code platform for AI-powered decisioning and workflow automation. 

“We’ve added a staggering number of powerful generative AI-powered boosters across Pega Infinity to help organizations quickly leverage the power of generative AI to work faster and more efficiently,” said Kerim Akgonul, chief product officer, Pega. “Our clients will have the flexibility and security to power our generative AI features and build their own using their large language models of choice. The Pega GenAI boosters released in Pega Infinity ‘23 are just the start — we plan to add new ones on a regular basis as we continue to evaluate new ways to responsibly and securely leverage generative AI and as new models come to market.”

Teradata and Dataiku Strengthen Integration to Deliver AI at Scale

Teradata (NYSE: TDC) announced new ClearScape Analytics capabilities designed to allow enterprise customers to import and operationalize Dataiku AI models inside the Vantage analytics and data platform. With these new capabilities, Teradata expects to integrate and operationalize Dataiku models at scale. This combination of Dataiku and Teradata’s ClearScape Analytics empowers customers to accelerate digital transformations and deliver AI-led business value.

The collaboration between Teradata and Dataiku aims to solve the challenges of operationalizing AI at scale with an all-in-one solution that enables users of any skill set to prepare, train, and operationalize AI models. Using Dataiku’s intuitive front-end user interface and no-code functionality with ClearScape Analytics, Teradata Vantage’s advanced analytics capabilities, customers can put more AI models into production, faster, and rapidly scale the usage of those models across an organization.

“AI is no longer optional for businesses that want to compete – and win – in today’s marketplace,” said Hillary Ashton, Chief Product Officer at Teradata. “To get ahead of the pack, businesses need AI partners that are knowledgeable, reliable and trusted throughout the entire AI lifecycle. Teradata’s long-standing relationship with Dataiku provides that support, from start to finish. Whether users are data scientists or non-technical, our continued alignment gives everybody the power to develop an AI model with speed and scalability – all in one place.”

Versium’s New Data Prep Solution Fixes Data to Drastically Improve AI Modeling and Marketing Performance

Research shows that bad data costs U.S. businesses more than $3 trillion per year – dragging down performance of marketing campaigns and ROI. Versium, a leading data technology company, announced the launch of its new Data Prep product – a solution that empowers marketers to quickly fix massive amounts of data at scale so it can be deployed more effectively in all data-driven marketing activities. 

Versium’s Data Prep is designed for businesses that need to manage and fix large amounts of data but don’t have the resources or expertise to do so effectively. Data Prep rapidly diagnoses large volumes of data and automatically fixes errors, such as missing fields, inconsistent formatting, typos and more, preparing data for further enrichment or even AI models, greatly increasing the ability to reach a target audience across all channels. Data Prep uses AI models to parse identity data such as location, job titles and names, to ensure consistency across large volumes of data.
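The kind of record-level cleanup described here is easy to picture with a small sketch: normalize inconsistent formatting and fill trivially inferable gaps before the data reaches enrichment or modeling. The field names and rules below are illustrative, not Versium’s actual product logic:

```python
# Minimal sketch of automated record cleanup: normalize casing and
# whitespace, canonicalize common variants, and fill a missing field
# from its parts. All field names and rules here are illustrative.
def clean_record(rec: dict) -> dict:
    out = dict(rec)
    # Normalize casing and whitespace in name-like fields.
    for field in ("first_name", "last_name", "city"):
        if out.get(field):
            out[field] = out[field].strip().title()
    # Canonicalize common state-name variants to USPS codes.
    states = {"washington": "WA", "wash.": "WA", "wa": "WA"}
    if out.get("state"):
        out["state"] = states.get(out["state"].strip().lower(), out["state"].upper())
    # Fill a missing display name from its parts instead of leaving a hole.
    if not out.get("full_name") and out.get("first_name") and out.get("last_name"):
        out["full_name"] = f"{out['first_name']} {out['last_name']}"
    return out

raw = {"first_name": " ada ", "last_name": "LOVELACE",
       "state": "wash.", "full_name": ""}
cleaned = clean_record(raw)
```

At marketing scale the same idea runs over millions of rows, often with ML-based parsers for fields like job titles where rule tables fall short.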

“Data – especially first-party data – is everything in today’s digital landscape with the rise of AI. But there are myriad scenarios where bad data can be captured or entered, and we know that dirty data ‘in’ translates to dirty data ‘out,’” said Kevin Marcus, co-founder and CTO of Versium. “Data Prep empowers anyone to automate the data cleansing process – the first step in the data journey for most marketers looking to unlock better insights or implement AI in their marketing strategies. Data Prep can be accessed through our REACH UI or directly integrated into your own data pipelines through our APIs.”

Arcitecta and Spectra Logic Unveil High-Performance Scale-Out NAS and Object Storage Solutions for Complete Data Lifecycle Management and Massive Cost Savings    

Arcitecta, a creative and innovative data management software company, and Spectra Logic, a leader in data management and data storage solutions, announced that they have teamed to deliver two groundbreaking solutions that simplify data lifecycle management and accelerate performance speed, innovation, and business success. The Arcitecta Mediaflux + Spectra BlackPearl NAS solution provides high-performance scale-out NAS and the Arcitecta Mediaflux + Spectra BlackPearl Object Storage solution provides archive economics, high availability, enterprise-grade data protection and massive cost savings.

Today’s data-driven organizations must process huge amounts of data to accelerate innovation, re-engineer operations and facilitate more efficient service delivery models. Computing performance is critical for enhancing agility and gaining the competitive edge necessary to drive business growth and success. The combined Mediaflux and Spectra BlackPearl solutions are designed to provide exceptional performance, scale, security, and efficiency, enabling data to be processed quickly by any NFS, SMB or S3 application or workflow. 
 
“The combined solutions offer unprecedented high performance, scalability, security, efficiency, and significant cost savings,” said Matt Starr, CTO of Spectra Logic. “With Arcitecta’s innovative data management solutions and Spectra Logic’s technology strengths and capabilities, customers can cost-effectively manage massive data volumes in powerful new ways – a game-changer in the data storage industry.” 

Cyara Announces OpenAI GPT-3 Integration to Accelerate Conversational AI Chatbot Training and Testing

Cyara, the creator and leader of the Customer Experience (CX) Assurance category, announced its integration of OpenAI’s GPT-3, which will accelerate the generation of training and testing data for Cyara Botium, the company’s one-stop solution for comprehensive, automated chatbot and conversational AI CX testing and assurance. The GPT-3 integration allows enterprises to accelerate the development of their chatbots and voicebots while simultaneously improving chatbot quality.

“Cyara is leading the charge in delivering exceptional conversational AI experiences by recognizing the power of large language models (LLMs). This integration is another example of our commitment to delivering cutting-edge chatbot testing solutions to our customers,” said Christoph Börner, Senior Director, Digital at Cyara. “Cyara’s integration of GPT-3 for training and testing conversational AI has not only elevated the industry standard for delivering exceptional chatbot experiences but has also played a pivotal role in shaping the future of CX.” 

Red Hat OpenShift AI Accelerates Generative AI Adoption Across the Hybrid Cloud

Red Hat, Inc., the world’s leading provider of open source solutions, announced new capabilities for Red Hat OpenShift AI. Building and expanding upon the proven capabilities of Red Hat OpenShift and Red Hat OpenShift Data Science, Red Hat OpenShift AI provides a consistent, scalable foundation based on open source technology for IT operations leaders while bringing a specialized partner ecosystem to data scientists and developers to capture innovation in AI. To that end, Red Hat OpenShift AI underpins the generative AI services of IBM watsonx.ai, IBM’s artificial intelligence platform designed to scale intelligent applications and services across all aspects of the enterprise, fueling the next generation of foundation models.

As Large Language Models (LLMs) like GPT-4 and LLaMA become mainstream, researchers and application developers across all domains and industries are exploring ways to benefit from these and other foundation models. Customers can fine-tune commercial or open source models with domain-specific data to make them more accurate for their specific use cases. The initial training of AI models is incredibly infrastructure intensive, requiring specialized platforms and tools even before serving, tuning and model management are taken into consideration. Without a platform that can meet these demands, organizations are often limited in how they can actually use AI/ML.

OpenShift AI addresses these challenges by providing the infrastructure consistency across training, deployment and inference to unlock the potential of AI. 

“Foundation models provide real, tangible benefits to enterprises when it comes to harnessing the benefits of AI, but they still require investment in training and fine-tuning to meet the unique needs of an enterprise,” said Chris Wright, Chief Technology Officer and senior vice president, Global Engineering, Red Hat. “Red Hat’s vision for enterprise AI builds on this existing reality with Red Hat OpenShift AI, which provides a flexible and scalable foundation to train, maintain, fine-tune and actually use foundation models in production. Best of all, OpenShift AI is still OpenShift, meaning that IT organizations trust it and understand it, and can extend their AI/ML operations from meeting today’s needs to tomorrow’s.”

DDN QLC SSD Storage Delivers 10X Speed for AI and Data Centers at Any Scale

DDN®, a leader in artificial intelligence (AI) and multi-cloud data management solutions, announced a major breakthrough for its all-flash and hybrid storage solutions. DDN’s parallel file system technology, combined with AI and data center specific data compression, delivers the highest performance efficiency straight into generative AI, machine learning and other enterprise high-performance applications.

Eliminating the need for complex networking and heavily bottlenecked performance found in other data storage solutions, DDN’s new AI400X2 QLC and hybrid storage arrays combine DDN’s parallel file system with novel client-side data compression, increasing performance by 10x, growing effective capacity by up to 15x and reducing data center footprint by 2x.
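As a rough illustration of how client-side compression grows effective capacity: data compressed on the client before it is written means the array stores fewer physical bytes per logical byte. DDN’s actual codec is proprietary, so the sketch below uses Python’s standard `zlib` purely to demonstrate the principle, with made-up telemetry-style data:

```python
import zlib

# Hypothetical illustration only: compress on the client before writing,
# so the storage array holds fewer bytes than the application wrote.
# zlib stands in for DDN's proprietary compression; the data is synthetic.
payload = b"sensor_reading,timestamp,value\n" * 10_000  # highly repetitive data

compressed = zlib.compress(payload, level=6)
ratio = len(payload) / len(compressed)

print(f"logical bytes:  {len(payload)}")
print(f"stored bytes:   {len(compressed)}")
print(f"effective capacity multiplier: {ratio:.1f}x")
```

Real-world multipliers depend heavily on how compressible the workload’s data is; repetitive text compresses far better than already-compressed media.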

By owning and optimizing the entire data path, DDN appliances directly impact application performance. In addition, the platform requires less infrastructure, reduces power draw and lowers the consumption of data center real estate.

“Today’s QLC scale-out NAS systems offer low cost and high capacity, but they are extremely inefficient with IOPS, throughput and latencies, making them unusable for high-performance environments such as AI, machine learning, and real-time applications,” said Dr. James Coomer, SVP of Products, DDN. “Our parallel file system and data compression technologies, which power DDN’s new AI400X2 QLC and hybrid storage arrays, solve the challenges in these at-scale and high-performance environments, delivering a magnitude of improvements and benefits for our customers.”

SingleStore Launches MongoDB API to Power AI and Real-Time Analytics on JSON

SingleStore, the cloud-native database built for speed and scale to power real-time applications, announced the launch of SingleStore Kai™ for MongoDB, a new API that turbocharges real-time analytics on JSON (JavaScript Object Notation) and vector-based similarity searches for MongoDB-based AI applications — without the need for any query changes or data transformations.

SingleStoreDB is a real-time distributed SQL database combining analytical and transactional workloads in one unified platform. In a new era of ever-increasing adoption of AI, making analytics real time and actionable is even more imperative. A vast majority of data accumulated in the world today is in JSON format, and MongoDB has grown to be one of the most widely adopted NoSQL databases to store and process JSON — powering a variety of use cases across martech, IoT, gaming, logistics, social media, e-commerce and content management applications.

However, document databases are not optimized for analytics, and users often experience delays or lagging query performance when attempting to perform analytics on JSON data. SingleStoreDB, by contrast, is architected to power real-time analytics on transactional data, enabling users to drive ultra-fast analytics on both structured and semi-structured (JSON) datasets. The new API is MongoDB wire protocol compatible, enabling developers to power interactive applications with real-time analytics on SingleStoreDB using the same MongoDB commands.
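Wire-protocol compatibility means an existing MongoDB driver such as pymongo should work with only a new connection string; the queries themselves stay in standard MongoDB syntax. The sketch below shows an unchanged aggregation pipeline; the connection URI and collection names are hypothetical placeholders, not documented endpoints:

```python
# Because SingleStore Kai speaks the MongoDB wire protocol, an existing
# driver needs only a different connection string. The URI below is a
# made-up placeholder for illustration:
#
# from pymongo import MongoClient
# client = MongoClient("mongodb://user:pass@kai.example.singlestore.com:27017")
# orders = client["shop"]["orders"]

# The aggregation pipeline is ordinary MongoDB syntax, unchanged:
pipeline = [
    {"$match": {"status": "shipped"}},
    {"$group": {"_id": "$region", "revenue": {"$sum": "$total"}}},
    {"$sort": {"revenue": -1}},
]

# results = list(orders.aggregate(pipeline))  # runs against Kai as-is
print(pipeline)
```

The point of the design is that no query rewriting or ETL step is needed; the same application code targets either backend.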

“The demand for real time analytics is undeniable and critical to today’s economy,” said Raj Verma, CEO, SingleStore. “With SingleStore Kai, we’re enabling any developer using MongoDB’s NoSQL platform to use SingleStore’s SQL analytics data platform, at orders of magnitude improved performance, without changing a line of code.”

Arize AI Launches LLM Observability Tool

Arize AI, a market leader in machine learning observability, debuted new capabilities for fine-tuning and monitoring large language models (LLMs). The offering brings greater control and insight to teams looking to build with LLMs. As the industry re-tools and data scientists begin to apply foundation models to new use cases, there is a distinct need for new LLMOps tools to reliably evaluate, monitor, and troubleshoot these models. According to a recent survey, 43% of machine learning teams cite “accuracy of responses and hallucinations” as among the biggest barriers to production deployment of LLMs.

“Despite the power of these models, the risk of deploying LLMs in high risk environments can be immense,” notes Jason Lopatecki, CEO and Co-Founder of Arize. “As new applications get built, Arize LLM observability is here to provide the right guardrails to innovate with this new technology safely.”

Airbyte No-Code Builder Revolutionizes Data Integrations, Creates Connectors in Just Minutes

Airbyte, creators of the open-source data integration platform, announced availability of its new no-code connector builder that makes it possible to easily and quickly create new connectors for data integrations. The builder enables non-engineers, such as data analysts, to create an extract, load, transform (ELT) connector within just five minutes – a process that traditionally could take more than a week.

“With businesses adding more data from increasingly diverse sources for analysis and decision-making, we’re making it easy to create custom connectors to serve every possible need,” said John Lafleur, co-founder and chief operating officer, Airbyte. “The combination of our new builder along with our open-source model means more data connectors for our user community of more than 10,000.”

Helpless Chatbots Face Certain Death as the Era of LLM-powered AI Begins

Quiq, the technology company creating the future of conversations between businesses and their customers, announced the release of Conversational Customer Experience (CCX), the next generation of conversational AI based upon Large Language Models (LLMs). Mindless chatbots throughout the world heaved a sigh of defeat, realizing that their days were numbered. Simultaneously, consumers cheered at the news of smarter communications with their favorite brands in the future.

“We’ve all seen the amazing ability of ChatGPT to read and write language,” said Quiq founder and CEO Mike Myer. “ChatGPT has the potential to revolutionize internet searching, but many business leaders worry it’s not fit to answer their customers’ questions. That’s because customer service issues require 100% accurate answers, not guesses based upon stale data from the internet, which is all ChatGPT knows. It can’t help customer experience leaders improve their current chatbot experience. But there is a solution – Quiq Conversational Customer Experience.”

SnapLogic Unveils SnapLabs for Exclusive Access to Cutting Edge Integration Solutions

SnapLogic, a leader in intelligent integration and enterprise automation, announced the launch of SnapLabs, a dedicated environment for the SnapLogic community to experience unreleased products and features. SnapLabs will enable users to gain early access to cutting-edge solutions and influence the company’s roadmap by providing valuable feedback on product offerings before they are publicly released. 

SnapLabs will host multiple new product releases and features on an ongoing basis. SnapLabs’ debut product is SnapGPT, the industry’s first Generative AI solution that allows anyone to integrate data and applications in any language. Announced earlier this year, this groundbreaking addition to the SnapLogic platform leverages AI to quickly integrate and automate business processes using natural language prompts, enabling users to streamline data integration, application integration, and API Management. Built on six years of AI and ML research, SnapGPT empowers users to create integration processes more efficiently and effectively than ever before.

“SnapLogic is committed to delivering cutting edge innovation through best-in-class products, services, and technology,” said Jeremiah Stone, Chief Technology Officer, SnapLogic. “From pioneering the industry’s first generative AI solution to the launch of this new SnapLabs testing environment, we are dedicated to developing solutions that truly meet the needs of our customers and the industry to transform the future of integration.”

Elastic Unveils the Elasticsearch Relevance Engine for Artificial Intelligence

Elastic (NYSE: ESTC), the company behind Elasticsearch, announced the launch of the Elasticsearch Relevance Engine (ESRE), powered by built-in vector search and transformer models, designed specifically to bring the power of AI innovation to proprietary enterprise data. ESRE enables companies to achieve breakthrough results by securely taking advantage of all their private structured and unstructured data, securing and protecting private information more effectively, and optimizing infrastructure and talent resources more efficiently.
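The built-in vector search ESRE exposes is of the kind Elasticsearch’s kNN search API supports: documents carry a dense-vector field, and queries supply an embedding of the user’s question. The sketch below shows the shape of such a request body; the index and field names are hypothetical, and the exact options available depend on your Elasticsearch version, so consult Elastic’s documentation before relying on any of them:

```python
import json

# Sketch of an Elasticsearch kNN search request body. Field and index
# names ("content_embedding", "docs") are invented for illustration.
knn_query = {
    "knn": {
        "field": "content_embedding",
        "query_vector": [0.12, -0.45, 0.88],  # embedding of the user's query text
        "k": 10,                # nearest neighbours to return
        "num_candidates": 100,  # candidates considered per shard
    },
    "_source": ["title", "url"],
}

# With the official Python client this would be sent roughly as:
# es.search(index="docs", body=knn_query)
print(json.dumps(knn_query, indent=2))
```

In practice the `query_vector` would come from the same embedding model used to index the documents, so that query and document vectors live in one space.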

“Generative AI is a revolutionary moment in technology and the companies that get it right, fast, are tomorrow’s leaders,” said Ash Kulkarni, CEO of Elastic. “The Elasticsearch Relevance Engine is available today, and we’ve already done the hard work of making it easier for companies to do generative AI right.” 

Messagepoint Unveils AI-generated Content for Customer Communications

Messagepoint Inc. announced the availability of AI-powered content generation to support the optimization of customer communications. Leveraging OpenAI’s ChatGPT and GPT-4, this new release augments Messagepoint’s AI engine, MARCIE (Messagepoint Advanced Rationalization and Content Intelligence Engine), to enhance its Assisted Authoring capabilities by providing content rewrite suggestions that align communications with desired reading levels, sentiment and length. The enhanced AI-powered Assisted Authoring is governed by enterprise-grade controls that make it faster, easier and safer for marketing and servicing teams to optimize content with recommended changes, while still retaining complete control over the outgoing message.

“Carrying on Messagepoint’s culture of innovation, we are proud to be the first to offer generative AI within the CCM space,” said Steve Biancaniello, founder and CEO of Messagepoint. “We recognize that the important and sensitive nature of these customer communications means content generation must be carefully managed and controlled. Our team of AI experts has implemented carefully designed controls and prompts to harness ChatGPT and GPT-4 for enterprise-grade applications, while ultimately leaving the final decision up to the humans in charge.”

Meet GLO: The Powerful Enterprise Savings Tool

Globality, a leader in AI-powered autonomous sourcing, announced the launch of its next-generation bot, GLO. Powered by ground-breaking generative AI capabilities, GLO represents the latest advancement in intelligent conversational interfaces. This transformative technology-enabled bot efficiently manages an average company spend of $4 billion per customer, empowering workforces to make informed decisions around purchasing products and services and addressing the critical needs of cost reduction and productivity improvement.

As an AI-first company, Globality has dedicated years to developing advanced machine learning capabilities. Now, with the power of generative AI technology, GLO has been supercharged. When it comes to optimizing your company’s spend for maximum returns, GLO does the heavy lifting for you and serves as a highly knowledgeable, incredibly fast-learning, and talented virtual team member, seamlessly integrating intelligence into every step of the buying process.

Joel Hyatt, Co-founder, Chairman, and CEO of Globality, commented, “Companies can’t cut costs meaningfully while keeping their outdated buying processes. How a company spends is crucial to remaining competitive, driving growth, and fostering innovation. None of this is possible if the archaic purchasing process consumes everyone’s energy with minimal impact. GLO not only captures the hearts and minds of its users but also plays a strategic role in enabling intelligent buying decisions, reducing costs, and assuring optimal utilization of funds to drive growth and innovation.”

Sign up for the free insideAI News newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideAI NewsNOW