insideAI News Latest News – 10/23/2023

In this regular column, we’ll bring you all the latest industry news centered around our main topics of focus: big data, data science, machine learning, AI, and deep learning. Our industry is constantly accelerating with new products and services being announced every day. Fortunately, we’re in close touch with vendors from this vast ecosystem, so we’re in a unique position to inform you about all that’s new and exciting. Our massive industry database is growing all the time, so stay tuned for the latest news items describing technology that may make you and your organization more competitive.

Alluxio Unveils New Data Platform for AI: Accelerating AI Products’ Time-to-Value and Maximizing Infrastructure ROI

Alluxio, the data platform company for all data-driven workloads, introduced Alluxio Enterprise AI, a new high-performance data platform designed to meet the rising demands of Artificial Intelligence (AI) and machine learning (ML) workloads on an enterprise’s data infrastructure. Alluxio Enterprise AI brings together performance, data accessibility, scalability and cost-efficiency to enterprise AI and analytics infrastructure to fuel next-generation data-intensive applications like generative AI, computer vision, natural language processing, large language models and high-performance data analytics.

To stay competitive and achieve stronger business outcomes, enterprises are in a race to modernize their data and AI infrastructure. On this journey, they find that legacy data infrastructure cannot keep pace with next-generation data-intensive AI workloads. Challenges around low performance, data accessibility, GPU scarcity, complex data engineering, and underutilized resources frequently hinder enterprises’ ability to extract value from their AI initiatives. According to Gartner®, “the value of operationalized AI lies in the ability to rapidly develop, deploy, adapt and maintain AI across different environments in the enterprise. Given the engineering complexity and the demand for faster time to market, it is critical to develop less rigid AI engineering pipelines or build AI models that can self-adapt in production.” Gartner also predicts that “by 2026, enterprises that have adopted AI engineering practices to build and manage adaptive AI systems will outperform their peers in operationalizing AI models by at least 25%.”

“Alluxio empowers the world’s leading organizations with the most modern Data & AI platforms, and today we take another significant leap forward,” said Haoyuan Li, Founder and CEO, Alluxio. “Alluxio Enterprise AI provides customers with streamlined solutions for AI and more by enabling enterprises to accelerate AI workloads and maximize value from their data. The leaders of tomorrow will know how to harness transformative AI and become increasingly data-driven with the newest technology for building and maintaining AI infrastructure for performance, seamless access and ease of management.”

Amplitude Brings Full Power of Digital Analytics to Every Team for Less

Amplitude, Inc. (Nasdaq: AMPL), a leading digital analytics platform, announced the launch of Amplitude Plus, a new self-service offering that puts more capabilities from its platform into the hands of more teams at a lower cost. With Plus, businesses of all sizes gain access to the most powerful features of Amplitude’s Digital Analytics Platform to deliver better digital experiences and drive better outcomes.

“We’ve heard the feedback that the pricing models for data analytics are broken. For the category as a whole, it’s clear that event-based approaches can get expensive fast, pricing isn’t transparent, and the jump from free to paid is often steep,” said Spenser Skates, CEO and co-founder of Amplitude. “With the Plus plan, we’re changing all that. We’re the first company to offer the best of our digital analytics platform in a self-serve way. It’s available at the best price on the market. And we’re making it super-simple to get up and running quickly.”

dbt Labs Announces the Next Generation of the dbt Semantic Layer, Introduced Alongside New Integration with Tableau

dbt Labs, a pioneer in analytics engineering, announced the next generation of the dbt Semantic Layer following its acquisition of Transform in February 2023. The dbt Semantic Layer enables organizations to centrally define business metrics in dbt Cloud and then query them from any integrated analytics tool. This allows organizations to ensure that critical definitions such as “revenue,” “customer count,” or “churn rate” are consistent in every data application.
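
The announcement doesn’t include code, but the core idea – metrics defined once and compiled the same way for every consuming tool – can be sketched in a few lines of Python. The metric definitions and the toy compile function below are hypothetical illustrations of the pattern, not dbt’s actual interface.

```python
# Toy illustration of the semantic-layer idea: one shared metric definition,
# many consumers. Hypothetical sketch, not dbt's implementation.

METRICS = {
    # Each metric is defined once, centrally, as an aggregation over a model.
    "revenue": {"model": "orders", "expr": "SUM(amount_usd)",
                "filter": "status = 'completed'"},
    "customer_count": {"model": "customers",
                       "expr": "COUNT(DISTINCT customer_id)", "filter": None},
}

def compile_metric_sql(metric: str, group_by: str | None = None) -> str:
    """Every consumer (BI tool, spreadsheet, notebook) compiles metrics through
    this one function, so the definition of 'revenue' cannot drift between tools."""
    m = METRICS[metric]
    select = f"{group_by}, {m['expr']} AS {metric}" if group_by else f"{m['expr']} AS {metric}"
    sql = f"SELECT {select} FROM {m['model']}"
    if m["filter"]:
        sql += f" WHERE {m['filter']}"
    if group_by:
        sql += f" GROUP BY {group_by}"
    return sql

print(compile_metric_sql("revenue", group_by="order_month"))
# SELECT order_month, SUM(amount_usd) AS revenue FROM orders WHERE status = 'completed' GROUP BY order_month
```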

dbt Labs is also shipping a new integration with Tableau for the dbt Semantic Layer, a game-changing step in the technology’s evolution. With this integration, the thousands of organizations that rely on Tableau’s analytics platform can benefit from business-critical metrics that are consistent, reliable, and reading from a single, verified source of truth. The Semantic Layer also integrates with Google Sheets, Hex, Klipfolio, Lightdash, Mode, and Push.ai with additional integrations planned for the coming months. 

“This is an important moment for the dbt Semantic Layer,” said Tristan Handy, founder and CEO of dbt Labs. “First, our customers now have access to the fully integrated best-in-class technology that we acquired in the Transform acquisition. Second, our customers can use the dbt Semantic Layer with two of the most widely used data analysis tools, Tableau and Google Sheets, with additional integrations available right now and more integrations coming soon. With the Semantic Layer, data teams can have confidence that everyone in the business is working from the same, shared definitions.”

EZOPS Launches Pypeline: A Revolutionary Data-First Application

EZOPS, a leading provider of an AI-enabled data control, workflow automation, and reconciliation platform, announced the official launch of EZOPS Pypeline™, a groundbreaking application that revolutionizes the way financial institutions optimize their data pipelines.

In a rapidly evolving landscape, EZOPS remains committed to innovation and has developed Pypeline to meet the dynamic needs of today’s data professionals. Unlike feature-rich but niche-bound offerings, Pypeline is a highly versatile, SaaS-native solution that empowers analysts, planners, operations users, data scientists, and all data-adjacent roles with its data-first approach.

Pypeline is built on a robust Python-based framework, leveraging the strength of a large open-source community. It offers enhanced support for unstructured data compared to traditional SQL-based solutions. By offering streamlined operations along with the capability to handle intricate data transformations and tailor-made business rules, Pypeline provides unparalleled flexibility for data processing and analysis. Pypeline integrates seamlessly with modern analytics tools, making it well suited to machine learning and AI applications. Its adaptability and versatility enable financial institutions to stay ahead of the competition in an increasingly data-driven world.
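
Pypeline’s own API isn’t shown in the announcement; as a rough sketch of the pattern it describes – Python-based transformations plus tailor-made business rules over tabular data – something like the following pandas example applies. The field names, the threshold rule, and the pipeline steps are invented for illustration.

```python
# Generic illustration of a Python pipeline step with a custom business rule,
# in the spirit of what the announcement describes; not EZOPS's API.
import pandas as pd

def load_trades() -> pd.DataFrame:
    # Stand-in for reading from a source system.
    return pd.DataFrame({
        "trade_id": [1, 2, 3],
        "notional": [1_000_000, 250_000, 4_750_000],
        "currency": ["USD", "usd", "EUR"],
    })

def normalize(df: pd.DataFrame) -> pd.DataFrame:
    # Transformation: standardize currency codes.
    out = df.copy()
    out["currency"] = out["currency"].str.upper()
    return out

def apply_business_rule(df: pd.DataFrame, threshold: float = 1_000_000) -> pd.DataFrame:
    # Tailor-made rule: flag trades at or above a notional threshold for review.
    out = df.copy()
    out["needs_review"] = out["notional"] >= threshold
    return out

pipeline = [load_trades, normalize, apply_business_rule]
result = None
for step in pipeline:
    result = step() if result is None else step(result)
print(result)
```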

“With EZOPS Pypeline, financial institutions can transform their operations and optimize their workflows effortlessly while harnessing the power of their data,” said Sarva Srinivasan, Co-founder and CEO at EZOPS. “Our revolutionary intelligent automation platform has already reshaped how these institutions operate. Pypeline takes it a step further by providing a user-friendly experience and empowering users to streamline their data pipelines and processes with ease.”

GridGain Unified Real-Time Data Platform Version 8.9 Addresses Today’s More Complex Real-Time Data Processing and Analytics Use Cases

GridGain®, provider of a leading unified real-time data platform, announced the release of GridGain Platform v8.9, strengthening its data integration and data hub capabilities. This latest version helps enterprises make their increasingly disparate, diverse, and distributed data more accessible for real-time processing and analytics, all at ultra-low latencies.

GridGain Platform v8.9 includes several new or enhanced integrations, including deeper support for Apache Parquet, Apache Iceberg, CSV, and JSON. These integrations enable enterprises to deliver highly performant real-time analysis across complex data workloads by making enterprise data in data lakes and semi-structured data in non-relational/NoSQL databases more easily accessible for processing. The ability to handle these complex workloads in a single platform helps enterprises simplify and optimize their data architectures to drive the extreme speed, massive scale, and high availability they require.
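
GridGain’s APIs aren’t detailed in the release; purely as a point of reference, the lake and semi-structured formats it now integrates with are the same ones routinely handled in Python. The snippet below uses pyarrow and pandas with placeholder file paths and is not GridGain code.

```python
# Reading the lake/semi-structured formats mentioned above (Parquet, CSV, JSON)
# with standard Python libraries; illustrative only, not GridGain's API.
import pandas as pd
import pyarrow.parquet as pq

events = pq.read_table("events.parquet").to_pandas()       # placeholder path
customers = pd.read_csv("customers.csv")                    # placeholder path
profiles = pd.read_json("profiles.json", lines=True)        # newline-delimited JSON

# Join structured and semi-structured sources for downstream analysis.
joined = events.merge(customers, on="customer_id").merge(profiles, on="customer_id")
print(joined.head())
```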

“GridGain has provided leading enterprises with the capabilities of a unified real-time data platform for years,” said Lalit Ahuja, Chief Product and Customer Officer at GridGain. “With each release of our platform, we strengthen our ability to exceed the high expectations of our enterprise customers in supporting the increasing complexity of modern data use cases such as traditional and generative AI, fraud detection, smart decisioning, operational analytics, and customer 360°. With version 8.9, GridGain continues to raise the bar for durable, ultra-fast data processing and analytics at scale.”

Manta SaaS Brings Powerful Data Visibility and Mapping to the Cloud

Manta, a leader in data lineage and metadata management, now offers Manta SaaS, giving customers the flexibility to take advantage of its best-in-class code-level data lineage in a hosted cloud environment. With Release 42 of its platform, Manta has innovated to meet evolving customer needs — from on-prem and cloud options and intuitive automation to enhanced customization and greater value for both data engineers and architects as well as analysts and business users. 

Poor data quality costs businesses an average of $12.9 million annually, according to Gartner. Data lineage prevents bad data from creating long-term issues by tracking and identifying where data changes and flows may encounter problems. Release 42 goes a step further to create a simpler user experience, enhancing automated processes and intelligence capabilities for better workflows and prescriptive capabilities. At the same time, the update increases the number of technologies for which the platform provides code-level scanning, including Databricks.

“Manta’s new cloud capabilities, advanced customization features and top-level visualizations make our platform more nimble than ever,” said Tomas Kratky, founder and CEO of Manta. “This release supports our ongoing goal to meet customers’ changing needs and offer greater cross-functional business value, no matter where they are in their data lineage journey.” 

insightsoftware Introduces Logi Symphony Embedded Business Intelligence and Analytics for Any Web-Based Application

insightsoftware, a global provider of reporting, analytics, and performance management solutions, announced the release of extensive new capabilities in the Logi Symphony software suite to help companies quickly build data-rich application experiences that keep their customers engaged. Logi Symphony now embeds self-service, end-to-end analytics and business intelligence (ABI) fused with artificial intelligence (AI) into any web-based application. This evolution marks the integration of insightsoftware’s acquired embedded analytics solutions, combining the best features of previously acquired products into a single self-service BI solution.

Logi Symphony provides seamless integration, flexible customization, and composability with a rich user experience. It also enables ISVs and enterprise application teams to launch products faster, drive end-user data-driven decision intelligence, improve customer retention and engagement with in-app contextual analytics, and monetize data through self-service analytics. This ensures that any reporting is powered by clean, accurate, comprehensive data that generates more impactful and timely insights. Organizations can better monetize data resources while empowering users to make more informed decisions, with highly customizable managed dashboards, reports, and visual data discovery content available directly within applications.

“Analytics are an essential component of any modern application, but embedding this functionality is not easy. It often leads to a poor user experience that frustrates end-users and hinders their decision-making,” said Lee An Schommer, Chief Product Officer at insightsoftware. “That’s why we created Logi Symphony, purpose-built software that empowers ISVs and application teams to embed analytics and visualizations into their application. This means organizations can ensure that their applications and users are operating at their fullest potential, driving improved operations and bottom-line impact.”

Nyriad Unveils Storage-as-a-Service (STaaS) Offering that Provides Game-Changing Simplicity and Flexibility

Nyriad® Inc., a pioneer in GPU-accelerated storage systems delivering excellent performance, cost-effectiveness and resilience, announced the launch of UltraIO™-as-a-Service, an on-premises Storage-as-a-Service (STaaS) offering tailored to meet the ever-increasing data management demands of modern enterprises.

Enterprises today face a myriad of hurdles, including capital budget constraints, rapid and unpredictable data growth, gaps in IT talent, operational complexity and ever-stringent sustainability mandates. Nyriad’s UltraIO-as-a-Service offering comprehensively addresses these challenges with flexible, simple-to-use STaaS options.

Nyriad’s UltraIO-as-a-Service user experience is defined by its simplicity, requiring only three key decisions: Contract Term, Data Services and Reserve Commitment. From there, Nyriad and its value-added reseller partners handle implementation and ongoing 24/7/365 proactive monitoring, alerting and customer support.

“We spent considerable time listening to our customers and partners to understand which features of existing storage-as-a-service offerings deliver real value to their customers,” said Andrew Russell, Chief Revenue Officer, Nyriad, Inc. “Armed with that intelligence, we are confident UltraIO-as-a-Service positions our partners to offer their end customers an easy-to-understand, easy-to-deploy, and easy-to-consume storage solution that meets their technology, business, and budgetary requirements.” 

Alation Announces ALLIE AI as a Co-Pilot for AI Engineers, Data Analysts, and Data Stewards to Accelerate Time-To-Value

Alation, Inc., the data intelligence company, announced ALLIE AI, a co-pilot to increase the productivity of AI engineers, data analysts, and data stewards. ALLIE AI builds upon Alation’s machine learning (ML) capabilities to enable organizations to save time and scale data programs more quickly, democratize trusted data, and advance new AI initiatives.

Organizations are racing to ensure enterprise data readiness for critical business initiatives, such as implementing AI-powered strategies through building and training large language models (LLMs). Yet the volume and breadth of data generated by the explosion of SaaS applications, along with the new personas – such as AI engineers – that need access to that data, have held back the delivery of such initiatives. For example, in a recent IDC survey, organizations reported that the most significant challenge to maximizing the value of their AI/ML initiatives was issues with data availability and data quality.

ALLIE AI automates the documentation and curation of data assets at scale, making it easy for analysts to find that data. Its intelligent curation capabilities help organizations accelerate the population of their data catalog by automatically documenting new data assets and suggesting appropriate data stewards. In addition, ALLIE AI’s intelligent search and SQL auto-generation let analysts find the data they need – customer data, for example – without knowing the names of the underlying datasets or how to write SQL, so they can build AI models and enable data-driven decision-making more quickly without a specialized data analyst skillset.
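
Alation hasn’t published ALLIE AI’s internals; the following is a minimal, hypothetical sketch of the natural-language-to-SQL pattern described above. The catalog, table names, and the stand-in for the model call are all invented for illustration.

```python
# Hypothetical sketch of the natural-language-to-SQL pattern described above.
# The catalog, tables, and the stand-in "model" are illustrative only.

CATALOG = {
    "crm.customers": ["customer_id", "name", "signup_date", "region"],
    "sales.orders": ["order_id", "customer_id", "order_date", "amount_usd"],
}

def find_relevant_tables(question: str) -> list[str]:
    """Very naive keyword match against catalog metadata (real systems use search/ML)."""
    words = question.lower().split()
    return [t for t, cols in CATALOG.items()
            if any(w in t or w in " ".join(cols) for w in words)]

def mock_generate_sql(question: str, tables: list[str]) -> str:
    """Stand-in for a model call that drafts SQL from the question plus table schemas.
    A real implementation would send the question and schemas to an LLM;
    here we return a canned query for demonstration."""
    return (
        "SELECT c.region, SUM(o.amount_usd) AS revenue\n"
        "FROM sales.orders o JOIN crm.customers c USING (customer_id)\n"
        "GROUP BY c.region;"
    )

if __name__ == "__main__":
    question = "What is revenue by customer region?"
    tables = find_relevant_tables(question)
    print("Candidate tables:", tables)
    print(mock_generate_sql(question, tables))
```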

“A data-driven organization can quickly address strategic questions. However, as data volumes grow, locating, understanding, and trusting the data becomes increasingly challenging,” said Junaid Saiyed, CTO at Alation. “This challenge becomes more pronounced when businesses invest in data initiatives like generative AI. Such projects demand extensive data stores to operate as intended. Alation has tackled the ‘4V’ problem – volume, velocity, variety, and veracity – for over a decade by harnessing ML and AI to support comprehension and confident usage. With ALLIE AI integrated into our data intelligence platform, data teams can expedite the discovery of relevant data, gain insights into the lineage of their AI models, and effectively manage business outcomes on a larger scale.”

pgEdge Platform, the First Fully Distributed Edge Database Based on Standard PostgreSQL, is Now Generally Available

pgEdge, Inc. announced general availability of pgEdge Platform, the fully open and fully distributed PostgreSQL database designed to run at or near the network edge and between cloud regions. With this milestone release – and after a seven-month beta period – pgEdge will now provide support for customers moving applications deployed on pgEdge Platform into production. pgEdge Platform packages pgEdge Distributed PostgreSQL as downloadable software that can be self-hosted and self-managed either in on-premises environments or in the cloud with major providers such as AWS, Microsoft Azure and Google Cloud Platform.

“We are excited to announce that pgEdge Platform is now generally available,” said Phillip Merrick, Co-founder and CEO of pgEdge. “With these new features and enhancements, we are pleased to be able to support customers taking their distributed Postgres applications into production.”

Crowdbotics Unveils New AI Capabilities to Accelerate Application Planning and Development

Crowdbotics, a leader in helping organizations harness the power of CodeOps, unveiled new AI capabilities in the Crowdbotics platform to help organizations plan and launch products in a fraction of the time. These new AI capabilities enable teams to create organization-level strategies for systemic code reuse, generate individual application build plans in seconds, and instantly assemble a running application from prebuilt parts.

“We are in the middle of a paradigm shift for product creators with Generative AI. Adapting to this new world requires us to redefine the software development lifecycle to be AI-first in planning, verifying, and launching software,” said Anand Kulkarni, founder and CEO, Crowdbotics. “The Crowdbotics platform delivers an efficient process to enable systemic code reuse within your organization using AI.” 

Iris.ai Unveils Researcher Workspace 1.0, an Accuracy-First AI Assistant for Scientific Researchers

Iris.ai, provider of a leading and award-winning AI engine for scientific text understanding, released the Researcher Workspace 1.0, a new platform that uses Natural Language Processing (NLP) to accelerate scientific research, putting factual accuracy first. Researchers can access a comprehensive AI tool suite powered by Iris.ai’s unique technology, including a brand new Chat feature, to search, explore, and extract information from swathes of scientific documents all in one place without compromising privacy. The Researcher Workspace’s uniquely comprehensive tool suite includes the Chat feature, content-based search, context and data filtering, data extraction from text and tables, document set analysis, and summarisation.

Given the pressing need for accuracy in scientific research, as well as the well-publicized hallucinations of many Large Language Models (LLMs), Iris.ai has developed a method to measure the factual accuracy of AI-generated content – testing precision and recall, fact tracing, and extraction. Using knowledge graphs and a universal metric named WISDM, designed by the company, Iris.ai tests context similarity to confirm factual accuracy. 
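
The WISDM formula itself isn’t disclosed, but the precision-and-recall framing mentioned above is standard; a minimal version of scoring AI-extracted facts against a reference set might look like the following sketch, with invented fact triples.

```python
# Minimal precision/recall check of AI-extracted "facts" against a reference
# set, in the spirit of the accuracy testing described above. The triples are
# invented examples; this is not Iris.ai's WISDM metric.

reference_facts = {
    ("polyethylene", "melting_point_c", "115-135"),
    ("polyethylene", "density_g_cm3", "0.91-0.97"),
    ("PET", "glass_transition_c", "67-81"),
}

extracted_facts = {
    ("polyethylene", "melting_point_c", "115-135"),   # correct
    ("PET", "glass_transition_c", "67-81"),           # correct
    ("polyethylene", "density_g_cm3", "1.38"),        # hallucinated value
}

true_positives = len(reference_facts & extracted_facts)
precision = true_positives / len(extracted_facts)   # how much of the output is right
recall = true_positives / len(reference_facts)      # how much of the truth was found

print(f"precision={precision:.2f}, recall={recall:.2f}")
# precision=0.67, recall=0.67
```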

Anita Schjøll Abildgaard, CEO and Co-founder of Iris.ai, commented, “The Researcher Workspace 1.0 is the product of seven years of research and development, based on the latest in AI, machine learning, and NLP technology. This is a game-changer for scientific researchers, and we’re grateful to our clients and customers who’ve been invaluable in our journey along the way. The Researcher Workspace 1.0’s ability to help researchers navigate, evaluate, and draw insights from thousands of research documents  in hours, not months, can accelerate the pace of discovery and innovation and help us tackle some of the biggest problems of our age: our clients are working on topics like climate change, plastic pollution, human health, and the spread of disease. We are determined to build on our selection for the European Innovation Council Accelerator funding, using our proprietary techniques, developed through years of research and development, to deliver better results for our clients at more affordable costs. With this release, we are one step closer to our goal of building a complete AI researcher – AI tools and applications which allow humans to make sense of the totality of the world’s scientific knowledge.”

Teradata Helps Customers Accelerate AI-led Initiatives with New ModelOps Capabilities in ClearScape Analytics

Teradata (NYSE: TDC) announced new enhancements to its leading AI/ML (artificial intelligence/machine learning) model management software in ClearScape Analytics (e.g., ModelOps) to meet the growing demand from organizations across the globe for advanced analytics and AI. These new features – including “no code” capabilities, as well as robust new governance and AI “explainability” controls – enable businesses to accelerate, scale, and optimize AI/ML deployments to quickly generate business value from their AI investments.

ModelOps in ClearScape Analytics makes it easier than ever to operationalize AI investments by addressing many of the key challenges that arise when moving from model development to deployment in production: end-to-end model lifecycle management, automated deployment, governance for trusted AI, and model monitoring. The governed ModelOps capability is designed to supply the framework to manage, deploy, monitor, and maintain analytic outcomes. It includes capabilities like auditing datasets, code tracking, model approval workflows, monitoring model performance, and alerting when models are not performing well.
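
Teradata’s ModelOps interfaces aren’t shown in the announcement; the model-monitoring and alerting idea it lists can be sketched generically as below, where the metric, baseline, and threshold are hypothetical stand-ins.

```python
# Generic sketch of production model monitoring with an alert threshold,
# illustrating the kind of check described above; not Teradata's ModelOps API.
from statistics import mean

def weekly_auc_scores() -> list[float]:
    # Stand-in for pulling a deployed model's evaluation metric from a store.
    return [0.86, 0.85, 0.84, 0.78]  # most recent week last

def check_model_health(scores: list[float], baseline: float = 0.85,
                       tolerance: float = 0.05) -> str:
    latest = scores[-1]
    if latest < baseline - tolerance:
        return f"ALERT: AUC {latest:.2f} dropped below {baseline - tolerance:.2f}; consider retraining"
    return f"OK: AUC {latest:.2f} (trailing mean {mean(scores):.2f})"

print(check_model_health(weekly_auc_scores()))
# ALERT: AUC 0.78 dropped below 0.80; consider retraining
```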

“We stand on the precipice of a new AI-driven era, which promises to usher in frontiers of creativity, productivity, and innovation. Teradata is uniquely positioned to help businesses take advantage of advanced analytics, AI, and especially generative AI, to solve the most complex challenges and create massive enterprise business value,” said Hillary Ashton, Chief Product Officer at Teradata. “We offer the most complete cloud analytics and data platform for AI. And with our enhanced ModelOps capabilities, we are enabling organizations to cost effectively operationalize and scale trusted AI through robust governance and automated lifecycle management, while encouraging rapid AI innovation via our open and connected ecosystem. Teradata is also the most cost-effective, with proven performance and flexibility to innovate faster, enrich customer experiences, and deliver value.”

Loft Labs Launches vCluster.Pro For Enterprise-Grade Virtual Kubernetes

Loft Labs, the Platform Engineering Platform, launched vCluster.Pro, the commercial version of its open-source project vCluster, which has become the de facto industry standard for virtual Kubernetes clusters. Loft Labs created the vCluster project in 2021, and as of today engineers have created more than 40 million virtual clusters using vCluster. With the new enterprise edition, called vCluster.Pro, Loft Labs meets growing demand from end users to run vCluster at enterprise scale and with rock-solid security controls.

“Kubernetes is the technical backbone of thousands of enterprises, including the most cutting-edge companies in the world. It has the power to accelerate innovation and deliver scalability and operational stability. vCluster has helped thousands of software engineers unlock the benefits of Kubernetes faster and at lower cost, but that was just the beginning,” said Lukas Gentele, CEO of Loft Labs. “With vCluster.Pro, we’re helping our most advanced customers run virtual Kubernetes clusters at an unprecedented scale and with rock-solid enterprise-grade security and performance enhancements. vCluster.Pro enables our customers to build and manage compute-intensive, industry-defining products and services on virtual Kubernetes clusters.”

Hitachi Vantara Unveils Hitachi Virtual Storage Platform One, Signifying a New Hybrid Cloud Approach to Data Storage

Hitachi Vantara, the modern infrastructure, data management, and digital solutions subsidiary of Hitachi, Ltd. (TSE: 6501), announced the transformation of its existing data storage portfolio with the introduction of Hitachi Virtual Storage Platform One, a single hybrid cloud data platform. The offering addresses critical challenges facing IT leaders, many of whom are striving to scale data and modernize applications across complex, distributed hybrid and multicloud infrastructure. A common data plane across structured and unstructured data in block, file, and object storage allows businesses to run different types of applications anywhere, on-premises and in the public cloud, without the complexities many face today.

“Hitachi Virtual Storage Platform One marks a significant milestone with our infrastructure strategy. With a consistent data platform, we will provide businesses with the reliability and flexibility to manage their data across various storage environments without compromise,” said Dan McConnell, senior vice president, product management for storage and data infrastructure, Hitachi Vantara. “The design, development, and construction of Virtual Storage Platform One, with a focus on reliability, security, and sustainability, further enhances the impact for our customers.”

Azion Offers Enhanced Observability Through New Real-Time Metrics Features 

Azion, a leader in edge computing, announced the availability of its new Real-Time Metrics features through its Observe product suite. The upgraded features provide seamless insights with powerful real-time visualization into application performance, availability and security for better observability within organizations. 

Observability is crucial for diagnosing problems, optimizing system performance and ensuring the reliability and availability of complex software applications and distributed systems. As observability enables teams to gain critical insights into what is happening within a system, it’s essential that they have the tools to monitor performance effectively. With the ability to run complex, faster queries in one click, Real-Time Metrics provides new, advanced features and filters that allow for robust data analysis and more detailed context on graphs, all on an enhanced interface design, for better user experience. 

“Observability isn’t just about seeing what’s happening within businesses; it’s about ensuring systems stay resilient. It’s one of our key pillars, and we have worked hard to offer our customers the best solution available so they can make observability a priority,” said Rafael Umann, CEO of Azion. “Our upgraded Real-Time Metrics functionality will allow users to have a complete view into their applications, which will ensure they have the context to make smarter and more informed business decisions.” 

Wiiisdom Bolsters AnalyticsOps Portfolio, Introducing Wiiisdom Ops for Power BI 

Wiiisdom, the pioneer in AnalyticsOps, announced Wiiisdom Ops for Power BI, a new governance offering designed to deliver trusted data and analytics at scale. This SaaS-based solution, part of the AnalyticsOps portfolio from Wiiisdom, unlocks and automates new testing capabilities and integrated BI content management workflows for Microsoft Power BI.  

“With more than 15 years of experience solving the toughest business intelligence challenges, we have a unique understanding of the problems that today’s data leaders face,” said Sebastien Goiffon, Wiiisdom Founder and CEO. “The launch of Wiiisdom Ops for Power BI is another step in our mission to minimize risk and increase trust in an organization’s data so business leaders across the globe can confidently make data-driven decisions.”

Sisense Unveils Public Preview of Compose SDK for Fusion, Empowering Developers to Build Insights-Driven Products

Sisense, an analytics platform provider that empowers thousands of companies to build and embed analytics into data products, unveiled the public preview of Compose SDK for Fusion. Compose SDK for Fusion is a flexible development toolkit that gives developers and product leaders tools to embed context-aware analytics in a code-first, scalable, and modular way, which accelerates the development process, reduces maintenance overhead, and saves valuable time compared to coding analytics from scratch.

“Composability is not just a buzzword; it’s the frontier of modern application development,” says Ariel Katz, CEO at Sisense. “We believe that this new era of composable modern apps is fundamentally changing how applications are developed and delivered. Agile and modular, these apps seamlessly integrate insights data and operations at their core. At Sisense, we’ve been a leader in embedded analytics, and Gartner has recognized us as Magic Quadrant visionaries for the past six years. Now, we’re evolving our battle-proven analytics platform to be developer-first, enabling true data insights app composition, all while maintaining our existing low and no-code capabilities. We’re committed to this evolutionary journey and plan rapid advancements in our offerings in the coming months. To ensure we meet the needs of a diverse range of analytics builders, we’re prepared to support the full spectrum—from no-code enthusiasts to seasoned developers, from bespoke internal apps to the most demanding customer-facing apps—thereby maintaining our leadership in the ever-changing landscape of embedded analytics within modern apps.”

Vultr Launches GPU Stack and Container Registry for AI Model Acceleration Worldwide

Vultr, the privately held cloud computing platform, announced the launch of the Vultr GPU Stack and Container Registry to enable global enterprises and digital startups alike to build, test and operationalize artificial intelligence (AI) models at scale, across any region on the globe. The GPU Stack supports instant provisioning of the full array of NVIDIA GPUs, while the new Vultr Container Registry makes pre-trained NVIDIA NGC AI models globally available for on-demand provisioning, development, training, tuning and inference. Available across Vultr’s 32+ cloud data center locations on six continents, the new Vultr GPU Stack and Container Registry accelerate speed, collaboration and the development and deployment of AI and machine learning (ML) models.

“At Vultr we are committed to enabling the innovation ecosystems — from Miami to São Paulo to Tel Aviv, Europe and beyond — giving them instant access to high-performance computing resources to accelerate AI and cloud-native innovation,” said J.J. Kardwell, CEO of Constant. “By working closely with NVIDIA and our growing ecosystem of technology partners, we are removing barriers to the latest technologies and offering enterprises a composable, full-stack solution for end-to-end AI application lifecycle management. This capability enables data science and engineering teams to build on their global teams’ models without having to worry about security, local compliance or data sovereignty requirements.”

Sign up for the free insideAI News newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/
