Heard on the Street – 2/22/2024

Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.

OpenAI Account Shutdown. Commentary by Michael Rinehart, VP of AI at Securiti

“In response to the recent news about OpenAI terminating accounts linked to state-affiliated hacking groups, there is a growing concern about the potential misuse of AI, particularly Large Language Models (LLMs), for cyberattacks. Relying solely on the inherent defenses of LLMs is insufficient for safeguarding against cyber threats, highlighting the pressing need for additional layers of protection to mitigate the risks posed by AI-powered attacks.

This entails a shift towards a two-tier security approach. Firstly, organizations should adopt application-specific models tailored for specific tasks, possibly supplemented by knowledge bases. These models provide high value for use cases such as Q&A systems. Secondly, an advanced monitoring system should be implemented to scrutinize access to and communications with these models for privacy and security issues. 

This layered approach provides significant flexibility and improved alignment with governance and data protection principles. It also allows organizations to leverage both traditional and cutting-edge security techniques for LLMs to mitigate the risks associated with Generative AI.”
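The second tier Rinehart describes, monitoring access to and communications with an application-specific model, can be sketched in a few lines. Everything below is a hypothetical illustration: the patterns, the `screen` and `guarded_query` helpers, and the stand-in model are assumptions for the sketch, not part of any vendor’s product.

```python
import re

# Minimal monitoring layer: screen traffic to an application-specific
# model for obvious privacy issues before the prompt ever reaches it.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen(text):
    """Return the list of sensitive-data categories found in `text`."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def guarded_query(prompt, model_fn):
    """Block or forward a prompt based on the screening result."""
    findings = screen(prompt)
    if findings:
        return {"status": "blocked", "reasons": findings}
    return {"status": "ok", "answer": model_fn(prompt)}

# A lambda stands in for the application-specific Q&A model.
answer = guarded_query("What is our refund policy?", lambda p: "30 days")
blocked = guarded_query("Look up 123-45-6789", lambda p: "never reached")
```

A production monitor would of course inspect responses as well as prompts and log every decision for governance reporting; the point of the sketch is only that the monitoring tier sits outside the model and needs no changes to it.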

University of Cambridge AI hardware regulation proposal. Commentary by Victor Botev, CTO of Iris.ai

“While we agree with the University of Cambridge’s proposals on AI hardware governance, it’s important to recognise that focussing solely on chips and servers is not the answer. There are obvious practical reasons for doing so, given their physical nature and the small number of supply chains, but there are alternatives to hardware perpetually playing catch-up with software.

We need to ask ourselves if bigger is always better. In the race for ever-bigger large language models (LLMs), let’s not forget the often more functional domain-specific smaller language models that already have practical applications in key areas of the economy. Fewer parameters mean less compute power is required, leaving more compute resources available to benefit society.”

Big opportunities with AI and LLM but approach with caution. Commentary by Dan Hopkins, VP of Engineering at STACKHAWK

“Artificial intelligence (AI) and large language models (LLM) dominated the industry conversation in 2023. And rightfully so. Many remain excited about the significant enhancements to productivity they may bring, a huge win for organizations struggling with limited resources during these tumultuous economic times. However, it’s important that we approach these new technologies with caution. While they have the potential to augment business operations, attackers will also be leveraging these tools to execute attacks on organizations and their employees. It is possible that we’ll see a rise in attacks on humans in the new year, particularly phishing scams.”

Practical Steps Towards Responsible AI: Industry-Wide Perspectives. Commentary by Mikael Munck, CEO and Founder, 2021.AI

“The tech industry is increasingly prioritizing the ethical and responsible use of artificial intelligence (AI) and large language models (LLMs). Addressing this need involves a shared responsibility model, where technology providers and organizations work together to ensure AI and LLMs are used wisely, especially in regulated sectors.

This collaborative approach is essential in areas like finance and healthcare, where compliance with regulations is critical. By adopting governance frameworks, organizations can manage how data is used, handle sensitive information securely, monitor activities, and report to stakeholders, ensuring AI is used with care and compliance.

The goal is to establish a clear and practical framework for using AI responsibly, aligning with regulatory standards, and ensuring AI’s benefits contribute positively to society and business.”

2024 election threats. Commentary by Ram Ramamoorthy, Head of AI Research at ManageEngine

“As everything around elections increasingly plays out in the digital world, businesses must be aware of the pivotal role Artificial Intelligence (AI) plays in maintaining the integrity of their operations during these democratic processes. The rise of digital platforms has not only transformed political campaigning but also presented new challenges for businesses, particularly concerning cybersecurity and misinformation. In this environment, AI emerges as both a tool and a challenge for businesses.

The onset of election campaigns marks a surge in digital information exchange, where distinguishing between factual content and misinformation becomes crucial. For businesses, this landscape poses a risk in terms of brand reputation and the spread of false information that could affect consumer perceptions and market stability. AI technologies are essential in monitoring and analyzing online content to identify potential misinformation that could impact business operations or corporate image. By employing advanced machine learning algorithms, businesses can proactively manage their digital footprint and mitigate the risks associated with misinformation.

Moreover, the heightened digital activity surrounding elections amplifies cybersecurity risks. Businesses, especially those providing digital services or platforms, may be inadvertently caught in the crossfire of cyber threats targeted at election processes. AI-driven cybersecurity solutions become indispensable in such scenarios, offering the ability to detect, analyze and respond to cyber threats in real-time. This includes protecting sensitive data and infrastructure, securing communication channels, and ensuring the integrity of digital transactions.

However, the deployment of AI in business contexts, especially during politically charged periods, must be handled with care. Businesses must strive to develop and deploy AI solutions that are transparent, accountable, and aligned with ethical standards, ensuring that their use of AI does not compromise customer trust or infringe upon individual rights.

In conclusion, as digital engagement intensifies during election campaigns, businesses must be vigilant about the dual challenges of misinformation and cybersecurity. AI offers powerful tools to address these challenges, but it also necessitates a responsible approach in its application. The imperative for businesses is clear – to harness AI effectively while upholding ethical standards and maintaining public trust, thereby contributing positively to the integrity of the digital landscape during election times.”

AI’s Role in Empowering Procurement to Drive Organizational Insights. Commentary by Stephany Lapierre, CEO and founder of TealBook

“Since 2020, extended supply chain disruptions, geopolitical events, advancing technologies and economic instability have indelibly changed how leaders conduct business. Now, uncertainty is the only certainty for modern organizations. However, expecting the unexpected is impossible, at least for organizations making strategic and financial decisions based on poor-quality data (or no data at all).

The critical nature of data-driven decision-making has been hammered home in several strategic departments. For example, Gen AI and artificial general intelligence (AGI) have revolutionized customer support by providing consumers with routine service and marketing leaders with back-end analytics. By identifying customer pain points, these technologies permit leaders to create a more advantageous customer experience. And in the finance department, good data hygiene enables leaders to automate critical but rote processes like fraud checks and document processing, freeing time for more strategic initiatives.

Yet procurement, which represents 30-50% of the average organization’s revenue, has yet to receive a data overhaul. Many leaders continue to rely on outdated methods of supplier data management, including manual processes and spreadsheets. Without good data, procurement leaders cannot understand a supplier’s risk blueprint, including their downstream suppliers, diversity certifications and manufacturing practices. Without this visibility, leaders will likely fall out of compliance and incur harsh penalties.

To improve overall operational efficiency, procurement teams need high-quality data. A trusted supplier data foundation enables data normalization, improves spend analytics, enhances decision-making and unearths critical cost-savings opportunities. By maintaining access to a routinely automated source of supplier data, leaders can understand real-time changes in their supply chain, allowing them to pivot to better suppliers, avoid fines and maximize market opportunities. These strategic advantages create a more efficient procurement department and have significant implications for an organization’s profitability.” 

AI is the next frontier of meaningful work. Commentary by Alexey Korotich, VP of Product, Wrike

“The business demand for AI has exploded as organizations seek new avenues of growth. In 2024, AI will continue to be a top priority for businesses as many organizations develop a deeper understanding of how to fuel efficiency and smarter working by relying on the technology. From go-to-market initiatives to product innovation, AI has fundamentally changed the way we work, and we’ve begun to see its impact beyond automating mundane tasks. While we can’t understand the long-term impact of Gen AI on our workforce entirely, it’s clear that investing in the technology will make way for new careers as the need for AI skills in the workforce remains a top priority. For example, business leaders can expect a rise in citizen developers, who will help to bridge the gap between the needs of business users and the constraints of rigid line-of-business applications, and allow people to craft workflows in natural language without writing code. This will eventually make software development more accessible, flexible and scalable than ever before.

And as AI plays a bigger role in their existing workflows, teams must continue to think strategically about where it can have the most impact. For example, increased access to data-driven insights can help teams supercharge their work, enabling them to make smarter decisions about how to prioritize projects and contribute in more impactful ways to their organizations. AI will also unlock new opportunities for creative thinking by removing time spent on looking for information and performing data analysis. As a result, business goals will become more attainable because employees can fully utilize their skill sets and spend time on higher-quality and higher-value work that matters. Beyond reducing time and costs, it will also reduce delays, maximize resources, and help teams deliver projects on time. This relationship between data and AI spans industries from high tech and finance to manufacturing and marketing, and I expect industry leaders to increase spending on research and implementation as AI increasingly becomes a collaborative tool for the future.”

AI Adoption – It’s all about user buy-in. Commentary by Chris Heard, CEO of Olive Technologies

“The rapid development of generative artificial intelligence (GenAI) presents a significant opportunity for enterprise organizations to enhance their operational efficiency and productivity. However, maximizing its benefits requires a proactive approach, starting with identifying suitable business use cases. Key functions with substantial potential for GenAI implementation include data analysis, financial reporting and goal setting, all of which involve hefty data processing and document generation. Proactively connecting with industry-leading developers and suppliers like Microsoft, Amazon and Google — and sharing example use cases — could help organizations shape the development of GenAI to align with their specific needs and avoid generic solutions that don’t fully optimize workflows.

Initiating engagement with GenAI technologies now empowers organizations to cultivate their maturity and paves the way for significant productivity gains in the future. Enterprises that prioritize identifying potential use cases and actively facilitating user adoption position themselves to capitalize on the transformative potential of AI-driven business operations. Equipping individuals with cutting-edge GenAI tools at the pilot stage and fostering collaboration to ensure seamless integration within those established workflows is crucial. While widespread production deployment might not yet be feasible, gathering early user feedback is essential for refining GenAI tools and ensuring they deliver tangible value as technology matures.”

Unlocking the Potential of LLM in Healthcare. Commentary by David Lareau, CEO, Medicomp Systems

“Integrating Large Language Models (LLMs) into healthcare holds great promise for transforming patient care. While the journey is fraught with ethical and practical hurdles, there lies a singular, actionable solution within our grasp: effectively managing the burgeoning and often overwhelming volume of healthcare data. This singular focus on data management is the key to unlocking LLMs’ potential, making healthcare data more manageable, insightful, and secure.

The cornerstone of this approach is the recognition that the heart of healthcare innovation lies not just in advanced algorithms or computing power, but in our ability to sift through, make sense of, and securely harness the vast seas of data that the sector generates. LLMs can scour immense amounts of data, intelligently filtering, analyzing, and prioritizing it to distill clinically actionable insights. This not only streamlines the decision-making process for healthcare professionals but also significantly reduces the risk of information overload—a critical factor in ensuring timely and accurate patient care.

By automating data analysis, LLMs improve privacy and security, and with proper training, they can mitigate biases, ensuring equitable care.

Focusing on data also simplifies LLMs, making their insights more transparent and understandable for healthcare providers. This builds trust in the technology and integrates it seamlessly into clinical settings. Moreover, data-centric LLMs ensure regulatory compliance, embedding privacy and security in their processes and aligning with laws like HIPAA.

Innovations like cloud-based Clinical Quality Measures (CQM) and Hierarchical Condition Categories (HCC) services demonstrate the practical application of LLMs in healthcare. These tools leverage AI to extract clinically relevant information from vast datasets, improving patient safety and care quality while addressing privacy, bias, and transparency challenges.

By concentrating on data management, LLMs can significantly contribute to healthcare, augmenting human expertise and ensuring better outcomes while prioritizing patient well-being. This approach not only addresses the ethical and practical challenges but also capitalizes on the strengths of LLMs to transform healthcare for the better.”
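The filter-and-prioritize step Lareau describes can be pictured with a toy sketch. The keyword weights, threshold, and sample notes below are all invented for illustration; a real system would use an LLM or clinical NLP model, not keyword matching, to score relevance.

```python
# Toy illustration of filtering and prioritizing clinical notes to
# reduce information overload. Weights and threshold are invented.
URGENCY_WEIGHTS = {"chest pain": 5, "allergy": 3, "follow-up": 1}

def score(note):
    """Crude stand-in for an LLM relevance score."""
    text = note.lower()
    return sum(w for kw, w in URGENCY_WEIGHTS.items() if kw in text)

def prioritize(notes, threshold=1):
    """Drop notes below the threshold; return the rest, most urgent first."""
    kept = [(score(n), n) for n in notes if score(n) >= threshold]
    return [n for s, n in sorted(kept, key=lambda x: -x[0])]

notes = [
    "Patient reports chest pain on exertion.",
    "Routine follow-up scheduled.",
    "Cafeteria menu updated.",
]
ordered = prioritize(notes)
```

The irrelevant note is filtered out entirely and the urgent one surfaces first, which is the overload-reduction effect the commentary is after.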

Thinking of adopting Gen AI? Check your data governance first. Commentary by Patrick Zerbib, Partner at Mazars in its Data Advisory Services practice

“Countless organizations are looking to embark on the Generative AI journey and reap the benefits of integrating it into their workflows. While this revolutionary technology has transformative potential, successful companies first make sure their AI strategy is well aligned with the overall business strategy and have already implemented a robust data governance framework. 

An important first step is for business leaders to set clear expectations for the value Generative AI could deliver toward broader organizational goals. What is important is to keep the integration consistent and relevant. Without this alignment, business leaders may find themselves implementing solutions that do not advance overall business objectives.

Next, business leaders should assess their data readiness, which is a critical component for managing data flows effectively. A well-designed and efficient framework can help safeguard data quality and accuracy standards, improve process consistency, document and streamline data flows, and strengthen overall risk management. Said differently, failing to create a robust data governance framework could threaten overall data quality and accessibility standards, and in turn potentially jeopardize the reliability and effectiveness of Generative AI deployments. With robust data frameworks in place, organizational leaders can more effectively address potential issues before implementing Generative AI.

The lack of a strong data framework increases the risk for potential legal and reputational consequences due to non-compliance with data regulations. To best navigate the regulatory landscape surrounding data usage and privacy, organizations must adopt stringent processes to ensure data integrity, security and compliance.

As emerging technologies continue to revolutionize traditional workflows, the implementation of an adequate data governance framework is even more imperative. This is an important, and often overlooked, condition to unlocking Generative AI’s full potential and to protect the organization from misusing the technology and exposing itself to potential legal and reputational risks.”

Sign up for the free insideAI News newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideAI NewsNOW