AI is ushering in a new era of productivity and innovation, but it’s no secret that there are urgent issues with the reliability of systems such as large language models (LLMs) and other forms of AI-enabled content production. From ubiquitous LLM hallucinations to the lack of transparency around how “black box” machine learning algorithms make predictions and decisions, there are fundamental problems with some of the most widely used AI applications. These problems hinder AI adoption and fuel resistance to the technology.
These problems are particularly acute when it comes to AI content creation. Given the sheer volume of AI-generated content out there, and how widely it ranges in quality, companies, educational institutions, and regulators have powerful incentives to be able to identify it. This has led to a profusion of AI detection tools designed to expose everything from AI-generated phishing messages to LLM-produced articles, essays, and even legal briefs. While these tools are improving, AI content generation will never stop evolving.
This means companies can’t afford to sit around and wait for AI detection to catch up — they must take proactive measures to ensure the integrity and transparency of AI-generated content. AI is already integral to a massive amount of content generation, and it will only play a larger role in the years to come. This doesn’t call for a never-ending battle between creation and detection — it calls for a robust set of standards around AI content production.
A new era of AI-generated content
In just the first two months after OpenAI released ChatGPT, it amassed more than 100 million monthly active users, making it the fastest-growing consumer application of all time. In June 2024, nearly 14 percent of top-rated Google search results included AI content — a proportion that is rapidly rising. According to Microsoft, 75 percent of knowledge workers use AI, nearly half of whom started using it less than six months ago. Just under two-thirds of companies are regularly using generative AI, double the proportion that were doing so ten months ago.
Although many employees are concerned about the impact of AI on their jobs, significant proportions say the technology has substantial benefits. Ninety percent of knowledge workers report that AI helps them save time, 85 percent say it enables them to focus on the most important work, and 84 percent say it improves their creativity. These are all indicators that AI will continue to be a major engine of productivity, including for creative tasks such as writing. This is why companies need to develop parameters around AI usage, security, and transparency, which will help them get the most out of the technology without assuming needless risks.
The line between “AI-generated” and “human-generated” content will naturally get blurrier as AI increasingly permeates content creation. Instead of fixating on AI “detection” — which will invariably flag large quantities of high-quality, legitimate content — it’s necessary to focus on transparent training data, human oversight, and reliable attribution.
Problems with AI content generation
Despite the remarkable pace of AI adoption, the technology has a growing trust problem. There have been several well-known cases of AI hallucination, in which LLMs fabricate information and pass it off as authentic. In one prominent example, Google’s Bard chatbot (later renamed Gemini) incorrectly claimed that the James Webb Space Telescope had captured the first pictures of a planet outside our Solar System, an error that sent Alphabet’s stock price tumbling. Beyond hallucinations and black box algorithms, there are other structural problems with AI that undermine trust.
For example, Amazon Web Services (AWS) researchers recently found that low-quality AI translations make up a large fraction of total web content in lower-resource languages. But the saturation of low-quality content may not be confined to certain languages: as AI-generated content accounts for an ever-larger share of the web, it could create major problems for AI training as we know it. A recent study published in Nature found that LLMs trained on AI-generated content are susceptible to a phenomenon the researchers describe as “model collapse.” After several iterations, the models lose touch with the accurate data they were originally trained on and start to produce nonsense.
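To see why this feedback loop is so corrosive, consider a deliberately simplified sketch (not the Nature study’s methodology): a toy “model” that simply fits a Gaussian to its training data, then produces the synthetic data the next generation is trained on. All names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_gaussian(samples):
    # A stand-in "model": maximum-likelihood estimates of mean and spread.
    return samples.mean(), samples.std()

n_samples = 100
data = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # generation 0: "human" data

for generation in range(51):
    mu, sigma = fit_gaussian(data)
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
    # Each new generation is trained only on samples drawn from the previous model.
    data = rng.normal(loc=mu, scale=sigma, size=n_samples)
```

The point is not the exact numbers but the direction of travel: when each generation learns only from the previous generation’s output, estimation noise compounds and the tails of the original distribution quietly disappear, a miniature version of model collapse.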
These are powerful reminders that AI content production requires the guiding hand of human oversight and frameworks that will help content creators observe the highest standards of quality, reliability and transparency. Although AI is becoming more powerful, oversight will likely become even more critical in the years to come. As I put it in a recent blog post, we’re witnessing a severe trust deficit across many of our most important institutions — a phenomenon that will naturally be even more pronounced with new technology like AI. This is why companies have to take extra steps to build trust in AI to fully realize the transformative impact it can have.
Building trust into AI content generation
Given problems like hallucination and model collapse, it’s no wonder that companies want to be capable of detecting AI content. But AI detection isn’t a cure-all for the inaccuracies and lack of transparency that hobble LLMs and other generative models. For one thing, detection technology will always be a step behind the ever-proliferating and increasingly sophisticated forms of AI content production. For another, AI detection is prone to producing false positives that penalize writers and other content creators who use the technology legitimately.
Instead of relying on the blunt instruments of detection and filtering, it’s necessary to establish policies and norms that increase the trustworthiness of AI-produced and AI-enabled content: clear disclosure of AI assistance, verifiable attestation of human review, and transparency around AI training sets. Focusing on improving the quality and transparency of AI-generated content will help companies address the growing trust gap around the technology, and it will allow them to harness AI’s full potential to enhance creative content. The marriage of AI with human expertise and creativity is an extremely powerful combination, but the valuable outputs of this kind of hybrid content production are always at risk of being flagged by detection tools.
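As a concrete illustration of what “disclosure plus attestation” could look like in practice, here is a hypothetical provenance record that a publishing workflow might attach to each piece of content. The schema, field names, and helper function are illustrative assumptions, not an existing standard or product.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional
import hashlib
import json

@dataclass
class ContentProvenanceRecord:
    """Hypothetical disclosure record attached to a published piece of content."""
    content_sha256: str                     # fingerprint of the exact published text
    ai_assisted: bool                       # was generative AI used at any stage?
    model_name: Optional[str]               # which model assisted, if any
    training_data_statement: Optional[str]  # link/note on the model's training-data transparency
    human_reviewer: Optional[str]           # who attested to reviewing the final version
    reviewed_at: Optional[str]              # ISO 8601 timestamp of that review

def build_record(text: str, model_name: Optional[str], reviewer: Optional[str]) -> ContentProvenanceRecord:
    """Assemble a disclosure record for one piece of content (illustrative only)."""
    return ContentProvenanceRecord(
        content_sha256=hashlib.sha256(text.encode("utf-8")).hexdigest(),
        ai_assisted=model_name is not None,
        model_name=model_name,
        training_data_statement=None,  # populated once the model vendor publishes one
        human_reviewer=reviewer,
        reviewed_at=datetime.now(timezone.utc).isoformat() if reviewer else None,
    )

# Example: an AI-drafted article that a human editor has reviewed and signed off on.
record = build_record(
    "Final article text...",
    model_name="example-llm-v1",      # hypothetical model identifier
    reviewer="editor@example.com",    # hypothetical reviewer identity
)
print(json.dumps(asdict(record), indent=2))
```

Whether such a record lives in a CMS field, a sidecar file, or signed metadata is an implementation detail; the point is that disclosure, human attestation, and training-data transparency become auditable facts rather than assertions.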
As AI becomes more deeply integrated into digital ecosystems, the hazards of using the technology become more pronounced. Regulations like the EU AI Act are part of a broad legal effort to make AI safer and more transparent, and we will likely see stricter rules in the coming years. But companies shouldn’t need to be coerced by stringent laws and regulations to make their AI operations more secure, transparent, and accountable. Responsible AI content production will give companies a powerful competitive advantage, as it will allow them to work with talented content creators who know how to fully leverage AI in their work.
The AI era has already led to a fundamental shift in how content is produced, and this shift is only going to keep accelerating. While this means there will be plenty of low-quality AI-generated content out there, it also means many writers and other content producers are entering an AI-powered creative renaissance, capable of taking on bigger problems than they ever could before. The companies best positioned to capitalize on this renaissance are the ones that emphasize transparency, security, and human-vetted, expert-curated data as they build their AI content policies and strategies.
About the Author
Joshua Ray is the founder and CEO of Blackwire Labs, and has over 20 years of experience navigating the commercial, private, public, and military sectors. As a U.S. Navy veteran and seasoned cybersecurity executive dedicated to enhancing cyber resilience across industries, he has played an integral role in defending some of the world’s most targeted networks against advanced cyber adversaries. As the former Global Security Lead for Emerging Technologies & Cyber Defense at Accenture, Joshua played a pivotal role in driving security innovation and securing critical technologies for the next generation of the global economy.