Pandora’s Box or Lockbox? The Top 3 Barriers to Implementing Generative AI


The Existential Threat

Months ago a seminal event occurred in industries around the globe, one that is causing disruption and displacement. Aspirants are positioning themselves to become leaders, and unsuspecting incumbents are scurrying to catch up so they won't be left behind. The event was merely the answer that companies gave, or failed to give, to an urgent question: how will we use AI to our competitive advantage?

The Progressives

A friend of mine launched LLMs to monitor regulatory changes so he can be the first to be compliant; after all, banking CDOs can be imprisoned for data breaches. Medical officers use AI image recognition to expose conditions undetectable to the human eye and to guide real-time surgical decisions for brain tumors. Using satellite imagery, insurers employ AI to estimate the relative devastation suffered by victims of natural disasters and issue ACH payments without ever visiting the homes.

Those that were already implementing AI prior to the explosion of generative AI have an advantage over recent entrants, who in turn have an advantage over those still deciding how they will respond. Those just joining the revolution must quickly understand and overcome the barriers, some of which are organizational and others technical.

Barrier #1 – The Lockbox

Generative AI was built for the cloud, but the most restricted data at many companies, especially those in regulated industries, remains safely on-prem under lock and key. Therein lies the conundrum. Context is critical for language models to be effective, yet many CDAOs justifiably fear exposing private data, their most valuable asset, to train models in the cloud. Even if privacy could be assured, there would still be trepidation that the data might be inferred from the models’ output.

Without foundational data as critical context, companies will only be training models that know little about them and can therefore do little for them. Instead of game-changing competitive advantage, the models will be capable only of incremental efficiency.

Not a Solution for Most Companies: Spend millions of dollars and a couple of years building your own LLMs inside the lockbox.

A Solution: Focus on machine learning algorithms to solve predictive and prescriptive challenges. Safely train the models inside the lockbox and use the outputs to make sound decisions and gain competitive advantage. This solution facilitates quickish AI wins while the generative AI marketplace matures to provide industry-specific language models for execution inside the lockbox.

Barrier #2 – The Data

(Data Availability, Data Governance & Data Quality)

If your data is already highly secure, how available is it to generate strategic business value? Is it integrated across all your environments? Is it governed, meaning that you control it and can rely on it to generate insights? Have you standardized your data assets to promote common interpretation? If data is fragmented, if data is ungoverned, if a multiplicity of non-standard data assets lends itself to variable interpretations, you’ll be training AI models to be just another opinion, an indefensible minority report. One CDO more than two years into a generative AI journey rightly quipped that AI doesn’t do magic.

A Solution: The good news is that modern data platforms can help you overcome this barrier very effectively. The bad news is that the people and process components to achieve data governance and data quality take time and effort. It’s a multiyear journey. Hopefully you’re already on your way.

Barrier #3 – The AI-Driven Culture

CDAOs love to talk about data-driven culture. Providing data and analytical insights that impact the top and bottom lines of the company is challenging in and of itself, but data-driven enculturation is far more challenging still, and a generative AI culture will not only be exponentially more difficult to achieve but will also have to be achieved more quickly.

Here’s what I mean. The connotation of data-driven culture is that the analysis of data for making decisions becomes an integral part of mission-critical workflows throughout the enterprise. Generative AI, however, doesn’t merely aid decision-making; it makes decisions. It creates. That means the culture won’t just need to understand the data to make sound decisions; it will need to be able to question the veracity of the decisions the models make. To do so, leaders will need to understand the tech and the models themselves, an education that data technicians can provide in exchange for intimate involvement in vetting and selecting the business processes most suitable for automation with Gen AI.

A Solution: Continue driving toward your data-driven cultural aspirations through steady improvements in data literacy. Make your business leaders effective decision makers through your analytics products such that they become dependent on those products for success. Elevate the mindset of your most data-driven, data-savvy business leaders and units. Invite them into your POCs to explore and validate the outputs of machine learning algorithms.

None of this will be easy. Revolutions rarely are.

About the Author

Shayde Christian, Chief Data & Analytics Officer at Cloudera. Shayde guides data-driven cultural change for Cloudera to generate maximum value from data. He enables Cloudera customers to get the absolute best from their Cloudera products such that they can generate high value use cases for competitive advantage. Previously a principal consultant, Shayde formulated data strategy for Fortune 500 clients and designed, constructed, or turned around failing enterprise information management organizations. Shayde enjoys laughter and is often the cause of it.

