Heard on the Street – 4/25/2024

Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.

Mitigating AI Hallucinations: Ensuring Trustworthy Chatbot Interactions. Commentary by Donny White, CEO and Co-Founder of Satisfi Labs 

“Training a new employee, especially in customer service or management, typically includes educating the person on what they need to know and how they need to communicate that knowledge. With the infusion of these new AI technologies, it has become much easier to acquire vast amounts of knowledge to train a conversational AI agent. However, the second part is much more difficult. Employers and brands that deploy AI agents want consistency and predictability in how those agents behave in certain situations.

Yet with generative AI, you see many use cases where the opposite is the actual result. Answers can be fabricated, a failure termed ‘hallucination.’ Imagine investigating a mishandled customer query only to hear the employee respond, ‘I know the right answer was in my training; I gave the wrong answer because I hallucinated.’ It is unlikely this person would remain an employee.

It is important to differentiate between the various types of AI responses and to apply the correct technology to each. If the goal is consistency, trustworthiness, and accuracy, then generative tools should be reserved for the right use cases and combined into a broader infrastructure to meet expectations effectively. In policy or FAQ scenarios, using these tools within well-defined datasets provides the advantages of the technology while minimizing potential errors. Conversely, for sales-focused conversations, traditional guided flows offer a streamlined solution where the goal is to funnel and transact rather than read through content and generate responses.
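
One common way to realize ‘generative tools within well-defined datasets’ is retrieval-grounded prompting, sketched below in Python. This is a minimal illustration, not Satisfi Labs’ system: the FAQ entries, the word-overlap retriever, and the llm_complete() stub are all hypothetical placeholders.

```python
# Minimal sketch: grounding a generative model in a well-defined FAQ dataset.
# The FAQ entries, the overlap scorer, and llm_complete() are hypothetical
# placeholders -- swap in your own retrieval method and model call.

FAQ = {
    "What time do gates open?": "Gates open 90 minutes before the event.",
    "Is re-entry allowed?": "Re-entry is not permitted once you exit the venue.",
    "What is the bag policy?": "Bags larger than 12x12 inches are not allowed.",
}

def retrieve(query: str) -> tuple[str, str]:
    """Pick the FAQ entry with the most word overlap with the query."""
    q_words = set(query.lower().split())
    return max(FAQ.items(), key=lambda kv: len(q_words & set(kv[0].lower().split())))

def answer(query: str) -> str:
    """Constrain the model to the retrieved policy, reducing room to hallucinate."""
    question, policy = retrieve(query)
    prompt = (
        "Answer ONLY from the policy below. If the policy does not cover the "
        "question, say you don't know.\n"
        f"Policy: {policy}\nQuestion: {query}\nAnswer:"
    )
    return llm_complete(prompt)  # hypothetical model call

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("Plug in your LLM provider here.")
```

The key design choice is that the model never sees anything beyond the retrieved, vetted policy text, which is one way a generative tool can be ‘combined into a broader infrastructure’ as described above.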

The consequences of one negative experience can lead to losing a customer, developing a distrust of the technology for future use, and tarnishing brand association. Today, many companies have a goal of deploying AI to reduce costs or enhance experiences. However, these projects have the highest potential for hallucinations because it’s a tool thrown into a project, rather than a project that will leverage a tool. By treating AI technology integration the same way we approach hiring employees, the use case selection and success metrics will be aligned with the desired results.”

Human-in-the-Loop Training Methods Will Become Increasingly Popular. Commentary by Peter Stone, PhD, Executive Director, Sony AI America

“Recently, we have seen developments in generative AI, generative adversarial networks, and diffusion models. However, I think one of the things that is also happening is the growing recognition of human-in-the-loop training methods and the possibilities they afford. These methods are different because it’s not simply about the computer training itself from large quantities of data. Human-in-the-loop provides the computer with the opportunity to learn from human feedback and input given through demonstrations; evaluations, where a human indicates whether the computer did a good job or a bad job; and interventions, where a person watches how the program is doing and gives corrective actions when it does something that the person doesn’t want it to do. In 2024 and the next few years ahead, the class of human-in-the-loop training methods will become increasingly mature and more widely used, opening up a plethora of new possibilities for artificial intelligence.”
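
The three feedback channels Stone describes (demonstrations, evaluations, and interventions) map naturally onto code. Below is a minimal, hypothetical Python sketch of an agent that records each signal type; the Agent class and the toy states and actions are illustrative scaffolding, not a real training algorithm.

```python
# Minimal sketch of the three human-in-the-loop feedback modes described above.
# The Agent class, states, and actions are hypothetical scaffolding; a real
# system would turn these logged signals into policy updates.

from dataclasses import dataclass, field

@dataclass
class Agent:
    demos: list = field(default_factory=list)        # (state, action) pairs shown by a human
    rewards: list = field(default_factory=list)      # human good/bad judgments
    corrections: list = field(default_factory=list)  # human overrides during execution

    def learn_from_demonstration(self, state, action):
        self.demos.append((state, action))           # imitation-style signal

    def learn_from_evaluation(self, state, action, good: bool):
        self.rewards.append((state, action, 1.0 if good else -1.0))  # reward-style signal

    def learn_from_intervention(self, state, wrong_action, corrected_action):
        self.corrections.append((state, wrong_action, corrected_action))  # override signal

agent = Agent()
agent.learn_from_demonstration("ball left", "move left")
agent.learn_from_evaluation("ball right", "move left", good=False)
agent.learn_from_intervention("ball center", "move left", "stay")
```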

The Use of AI in the Healthcare Industry. Commentary by Gary Shorter, Head of AI and Data Science at IQVIA 

“Though AI is utilized in limited ways across the field at the moment, we may only be at the cusp of understanding its true impact on the healthcare industry. AI has the potential to improve clinical trials and medical diagnoses, reduce administrative burden, advance the development of medical devices, improve disease prevention and advance telemedicine and remote healthcare. The opportunities for improved automation and analysis are bountiful in this industry.

At the same time, AI will likely bring a new wave of regulations and compliance mandates to the healthcare industry. We have already witnessed growing concern regarding the use of this technology among government bodies and federal regulatory agencies worldwide. Specifically, AI used in this industry must be trained on unbiased data, maintain human oversight, protect patient data privacy and ensure the safety of healthcare professionals and patients. The success of this innovation will most definitely depend on the ability to safely harness its capabilities.

AI has already secured a foothold in healthcare and is currently utilized as a co-pilot in many initiatives. One of AI’s most significant roles today is the automation of data collection and generation of analysis and insights across many life science areas. The use of AI to organize information, eliminate redundancies and provide actionable data is incredibly valuable to the industry. AI not only lays the foundation of data analysis but also provides the ability to take actionable steps following the organization of information.”

How LLMs expanding into multimedia could lead to an explosion of creativity or absolute mayhem. Commentary by Andrew Kirkcaldy, co-founder and CEO of Content Guardian

“Traditional AI models are trained on text alone, and while their outputs can be useful, there are limitations. Introducing the ability to input images, audio and video into AI will result in richer, more nuanced outputs than text alone can provide. As they say, a picture is worth a thousand words, but a video is worth a million, and AI trained on more than words can parse all of this context to deliver outstanding results. I anticipate more and more models being trained on multimedia inputs, and this training data will help eradicate previous blind spots, such as the 500 hours’ worth of video uploaded to YouTube every minute. All of that video content will be fair game for training now.

This influx of potential new training data is not without risks. With LLMs accepting video, audio and images as inputs, misinformation could certainly accelerate, especially with the increasing regularity of emulated video (deepfakes) and audio. On a more positive note, however, this expansion into multimedia could lead to an explosion of creativity…maybe we will get an AI-generated, feature-length Hollywood blockbuster in the not-too-distant future.”

Insights about the hype around low-code and what it means for the future of the software developer community. Commentary by Kirimgeray Kirimli, President at Flatiron Software

“Low-code platforms like Microsoft Power Apps and Power Pages, with the help of Copilot, make it really easy to build web applications and pages. Although convenient, this does make me worry about the job security of software developers in the coming years. That said, at the moment these platforms are not very sophisticated, and any project out of the ordinary will demand a certain level of software understanding and coding expertise.

I have a degree in CS myself, and I took many classes on assembly language. Interestingly enough, I never found myself applying the knowledge gained from them once the course was over. I strongly believe that as AI advances, programming languages will be treated the same way. While it’s useful to understand the logic and how it works, the specifics will be handled by AI and low-code platforms.”

Insights on fine-tuning. Commentary by Pavan Belagatti, Technology Evangelist at SingleStore

“Fine-tuning is a great opportunity to train and adapt any LLM with your own custom data so it can effectively retrieve contextually relevant information for any user’s query. The important things to consider when fine-tuning LLMs include:

(i) Data quality and diversity: The performance of LLMs relies heavily on the quality and diversity of the training data. High-quality, diverse datasets help create models that are less biased and more robust across different contexts and tasks. It’s important to include a wide range of linguistic styles, topics and perspectives so the model can handle a variety of situations and user queries. Hence, spending adequate time and resources on curating and reviewing your training data will result in higher-performing LLMs.

(ii) Ethical considerations: During the fine-tuning process, special attention must be given to ethical considerations and AI safety, including the mitigation of biases in the model’s outputs. This involves identifying and correcting biases in the training data. Once identified, strategies such as Reinforcement Learning from Human Feedback (RLHF) can be used to mitigate their effects.

(iii) Monitoring and evaluation: Continuously monitoring and evaluating LLM responses is a game changer in fine-tuning processes; it gives a clear idea of how the LLM is performing over time and the adjustments needed for better outcomes.

(iv) Experimenting with prompts: Crafting clear and concise prompts can significantly influence the quality of the LLM’s outputs. This might sound trivial, but it delivers clear value in understanding how the LLM generates specific outcomes based on certain prompts.”
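
As a concrete illustration of the mechanics involved (a minimal sketch, not SingleStore’s pipeline), the snippet below fine-tunes a small causal LLM with the Hugging Face transformers Trainer. The base model (gpt2), the train.txt data file, and the hyperparameters are placeholder assumptions; the RLHF and monitoring steps from points (ii) and (iii) would be layered on top of a loop like this.

```python
# Minimal supervised fine-tuning sketch with Hugging Face transformers.
# Model name, data file, and hyperparameters are illustrative placeholders.
# Assumes `train.txt` holds one curated training example per line.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in; use whatever base LLM you are adapting
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load and tokenize the custom dataset -- this is where the data quality and
# diversity work from point (i) pays off.
dataset = load_dataset("text", data_files={"train": "train.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # mlm=False produces causal-LM labels (inputs shifted by one token)
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```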

Microsoft & OpenAI deal set to avoid EU probe. Commentary by Josh Mesout, Chief Innovation Officer of Civo

“It’s disappointing to see the EU turn away from investigating Microsoft’s partnership with OpenAI. As an industry, we should be cautious of powerful partnerships, as they pose a threat to the entire ecosystem by suffocating competition and innovation. We cannot surrender AI to a virtual monopoly before it has really started.

Maintaining a diverse and competitive landscape is critical, given the far-reaching applications of foundational models across numerous industries. Over-dependence on a handful of major firms could stifle innovation, limit consumer choice, and potentially lead to a monopoly that favours Big Tech.

To keep the market fair and open, regulators should be eyeing these types of partnerships warily. Otherwise, we risk AI following the path of cloud, where hyperscalers run unchecked and leave a broken, locked-in, and stifled market in their wake.”

AI Has the Potential to Bring Cloud Sprawl Back Under Control. Commentary by Sterling Wilson, Product Strategist, Object First

“Talk to most IT administrators about their biggest complaints around their data security posture, and you’ll hear how the cloud rush ultimately led to confusion around where data is being held, when it is being downloaded offline, and where excess storage fees are coming from. However, AI, the latest buzzword in enterprise tech, gives us an opportunity to address this concern and bring runaway cloud applications under control. 

Without putting unnecessary strain on already overworked IT teams, businesses can use AI to analyze cloud performance and find opportunities to reduce redundancies or egress fees, for example. This will allow businesses to be smarter about when and how they leverage the cloud, and it ultimately raises an important question: ‘When is it appropriate to bring data back on premises?’ On-prem storage offers many benefits as a supplement to the cloud, primarily for backup, where a separate copy of the data provides an additional level of security and is faster to recover. As organizations leverage AI to optimize their cloud usage, we’ll start to see further repatriation of data back on premises.”
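
The sort of analysis described here can be prototyped without heavy machinery. The toy Python sketch below flags statistically unusual daily egress volumes; the daily_egress_gb figures are invented sample data, and a production system would pull from real billing exports and model seasonality rather than using a flat threshold.

```python
# Toy sketch of flagging anomalous cloud egress charges, the kind of analysis
# the commentary describes. The daily_egress_gb figures are fabricated sample
# data; real input would come from your cloud provider's billing export.

import statistics

daily_egress_gb = [102, 98, 110, 105, 97, 101, 480, 99, 103]  # hypothetical

mean = statistics.mean(daily_egress_gb)
stdev = statistics.stdev(daily_egress_gb)

for day, gb in enumerate(daily_egress_gb, start=1):
    z = (gb - mean) / stdev
    if z > 2:  # simple threshold; a real system would model seasonality
        print(f"Day {day}: {gb} GB egress is {z:.1f} sigma above the mean -- investigate")
```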

The Do’s and Don’ts of Adopting AI in HR. Commentary by David Lloyd, Chief Data & AI Officer, Dayforce    

“It’s clear AI is permeating every industry, and the HR space is no different. Gartner notes 76% of HR leaders believe that if their organizations don’t embrace AI in the next 1-2 years, they’ll fall behind competitors who do. That’s a lot of pressure. Rushing to implement AI without a plan, however, can lead to missteps, and only 9% of organizations have a vision or a plan. This can be hazardous in any industry, but especially within HR, because the function deals with employees and their sensitive personnel data. Being thoughtful about AI adoption is critical.

Before companies begin evaluating the growing plethora of AI-powered human capital management (HCM) software, they must align on a vision and break down what they want to accomplish. Is it improving efficiencies and the employee experience? Or is it transforming how the company operates? Answering these questions could also mean working closely with a partner that can help lead the way. AI can achieve many goals, but the path to experimentation and implementation will look very different depending on the answer. Only a limited few will seek to transform their industries. Do have deep discussions internally and ensure all leaders are on board with the desired outcome.

Because of the value and volume of employee data HCM software processes, understanding the data governance measures a platform takes is extremely important. Before implementing AI-enabled tools, ask questions of the provider. Is the model self-hosted? How is the privacy and security of worker information ensured? Where is the data fed from or stored? How do you audit the operation of your models? Are third-party audits performed? What approach was taken to bias-check the model? If you’re using generative AI, is your data being ingested as part of the model? Don’t assume guardrails are in place without asking tough questions first.

Once an organization chooses an AI-driven HCM solution, leaders must determine which functions of HR would most benefit from AI and how they’ll keep humans in the loop. Some areas, like payroll or HR service delivery, require less human oversight. Yet others, like recruitment and career development, require more. Don’t use a blanket approach to AI.”

Informatica founder on Salesforce acquisition. Commentary by Gaurav Dhillon, Informatica founder and current CEO of SnapLogic

“I have, at best, mixed feelings about this. On the one hand, I understand why it might be happening, and it makes sense in theory. It is a crystal-clear ‘I told you so’ moment for me, and it is why I founded SnapLogic: most enterprises are going to need one platform for application and data integration. But on the other hand, it will likely create massive turmoil for Informatica (and MuleSoft) customers.

That turmoil is inevitable when two legacy integration platforms like MuleSoft and Informatica have to be squeezed down to one to truly get a grip on important business data about customers, suppliers and markets. It will be a slow, painful process that takes years if all goes according to plan. The challenge is compounded because you can’t go backward in time to pre-cloud and pre-GenAI technology.

As GenAI plays out and transforms the technology and business landscape in a scant few years, enterprises that make the right innovative choices will gain markets and momentum, and others who buy house brands will continue to fall further and further behind.”

On getting enterprises and government data ready for AI. Commentary by Dan Higgins, Chief Product Officer, Quantexa

“As more enterprises and public sector agencies look to adopt AI, it’s crucial that organizations first look to their data. AI models are only as good as the quality of the data fed into them, so it’s essential to ensure data is clean, reliable, trusted and unbiased before allowing it to form the basis of any AI-enabled decision making. In an ideal world, organizations would have access to thoroughly vetted, clean, and certified datasets, but the reality is that most are dealing with massive amounts of data from disparate sources, subject to entry errors, mislabeling, format variations, duplication, and other data quality issues. There is also the challenge of inconsistency, where a single entity may be referred to by different names, for example the same person or company referenced in slightly different ways, making it difficult to identify specific individuals.

AI models must be able to recognize and learn from a variety of different data types and formats to be helpful. Gaining a 360-degree view of individual entities and their attributes in a scalable way requires more than simply combing through various sources to spot duplicates manually or via simple matching techniques. To address these challenges effectively, organizations need to deploy an advanced approach known as entity resolution.

Entity resolution parses, cleans, and matches data records by using sophisticated technologies (including machine learning, deep learning, NLP, and language models) to infer all the different ways of reliably identifying an entity. It clusters together records relating to each entity; compiles a set of attributes for each entity; and creates a set of labelled links between entities and source records. Using this approach allows organizations to derive insightful context from billions of records quickly and efficiently.
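
To make those steps tangible, here is a deliberately simplified Python sketch of the parse/clean/match/cluster pipeline: basic normalization, a naive string-similarity score standing in for the machine-learning matchers the commentary describes, and union-find clustering. The sample records and the 0.7 threshold are illustrative assumptions only.

```python
# Simplified entity-resolution sketch: normalize records, match on a naive
# similarity score, and cluster matches with union-find. Real systems, as the
# commentary notes, use ML, NLP, and richer attributes; these names are made up.

from difflib import SequenceMatcher

records = ["Acme Corp.", "ACME Corporation", "Acme Corp", "Globex LLC", "globex llc"]

def normalize(name: str) -> str:
    """Parse/clean step: lowercase and strip punctuation."""
    return name.lower().replace(".", "").replace(",", "").strip()

parent = list(range(len(records)))

def find(i):  # union-find with path compression
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(i, j):
    parent[find(i)] = find(j)

# Match step: pairwise similarity in place of a learned matcher.
for i in range(len(records)):
    for j in range(i + 1, len(records)):
        sim = SequenceMatcher(None, normalize(records[i]), normalize(records[j])).ratio()
        if sim > 0.7:  # naive threshold; illustrative assumption
            union(i, j)

# Cluster step: group records by their union-find root.
clusters = {}
for i, rec in enumerate(records):
    clusters.setdefault(find(i), []).append(rec)
print(list(clusters.values()))
# [['Acme Corp.', 'ACME Corporation', 'Acme Corp'], ['Globex LLC', 'globex llc']]
```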

It also provides a foundation to understand meaningful behaviors and networks through advanced techniques such as knowledge graphs. If organizations jump straight into implementing AI without ensuring their data is ready, decisions will be made based on inaccurate or incomplete data that lacks important contextual information.” 

AI Will Change the Nature of Work, Not Eliminate It. Commentary by Chandini Jain, Chief Executive Officer, Auquan

“No one doubts that AI is going to affect jobs and the way we work — not just augmenting human effort but, in many cases, outright replacing it. To understand who will be impacted and how, it’s important to recognize the nature of the jobs AI will replace, specifically in terms of generative AI and what we want it to accomplish.

In knowledge-intensive industries, which rely on the expertise, skills and intellectual capital of their workforce, we’re going to see a significant shift away from repetitive, manual data and research work toward higher-level analysis and faster decision-making — all due to generative AI.

As happens in every major platform shift, we’re going to see generative AI lead to more jobs — new, unknown jobs of the future — because the increase in productivity from the newly liberated knowledge workforce will open fresh business opportunities that enterprises will need to grow headcount to exploit.

Generative AI isn’t coming for positions of creativity and expertise, regardless of the attention that things like image and music generation apps enjoy. Prominent analyst Benedict Evans describes generative AI as ‘infinite interns.’ Think of it this way: If a knowledge-intensive enterprise had access to an infinite number of capable interns, where would it put them to work?

That’s where generative AI will be deployed. Generative AI won’t replace expertise and creative talent, but it will liberate the knowledge workforce from tedious, manual tasks so they can invest more of their effort on high-impact work.

One need look no further than current enterprise AI projects to understand why it will create new and better jobs, not eliminate them. To make generative AI work in the enterprise, it needs good data and good context. That means generative AI solutions need creative and talented people to design and operate them. The same goes for programmers. Generative AI solutions have already been great ‘co-pilots’ for software engineering teams, but without talented programmers and architects to design and operate these solutions, it will be rubbish in/rubbish out.”

AI Edge Deployments Will Demand Greater Mobile Connectivity. Commentary by Allwyn Sequeira, CEO at Highway 9

“The demands of AI far exceed those of the early cloud. AI models are an entirely different beast. They are often composed of millions, sometimes billions, of parameters and require large streams of data in real time — think AI applications like autonomous driving systems or real-time fraud detection. Training models requires substantial experimentation, testing different architectures, parameters, and training data subsets.

To meet these low-latency and data processing requirements, deployment of AI applications will eventually get pushed to the edge — that is, to the location where data is produced — not to a central cloud or remote data center.

This means mobile connectivity—which has not kept up with the rate of technological change—will need to dramatically improve and shift to a cloud-native approach.

A cloud-native model offers resilient connectivity critical for AI and AI-based applications like drones and robotics, which often operate across indoor and outdoor environments. This model ensures uniform connectivity, control, and orchestration across devices, integrating seamlessly with existing enterprise infrastructure and security policies. By streamlining operations and enhancing security, this approach reduces total cost of ownership while paving the way for agile mobile infrastructure to support future AI-driven use cases.”

Sign up for the free insideAI News newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideBIGDATANOW
