Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.
RAG brings GenAI to the enterprise. Commentary by Jeff Evernham, Vice President of Strategy and Solutions at Sinequa
“The advent of LLMs, and with them generative AI, has ushered in a new era of technological innovation, but generative AI has several shortcomings that prevent its use in most enterprise applications. In 2023, pairing search with GenAI in a technique called retrieval-augmented generation (RAG) emerged as the solution to these challenges, mitigating those weaknesses and opening up a broad range of opportunities to use generative AI in fact-based scenarios within businesses. The promise of generative AI to revolutionize enterprise applications through RAG is immense, giving employees a superhuman assistant so they can leverage all corporate knowledge simply by having a conversation. Enterprises that swiftly adopt and deploy robust RAG-powered assistants will harness the potential of GenAI to drive innovation, enhance productivity, and maintain a competitive edge in the evolving digital economy. The best RAG solutions require not just a capable GenAI model but also a robust search capability, so picking the right search platform is key.”
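To make the retrieve-then-generate pattern concrete, here is a minimal sketch. It is illustrative only: the `search_index` function is a placeholder for whatever enterprise search platform you use (Sinequa’s own product is not shown), and the OpenAI client and model name stand in for any capable LLM.

```python
# Minimal retrieval-augmented generation (RAG) loop.
# `search_index` is a stand-in for an enterprise search backend;
# the OpenAI client is used only as an example LLM.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def search_index(query: str, top_k: int = 3) -> list[str]:
    """Placeholder: return the top_k most relevant passages for `query`
    from your enterprise search platform."""
    raise NotImplementedError("wire this to your search backend")

def answer(question: str) -> str:
    # 1. Retrieve: ground the model in enterprise content.
    passages = search_index(question)
    context = "\n\n".join(passages)
    # 2. Generate: instruct the LLM to answer only from the retrieved text.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context. "
                        "If the context is insufficient, say so."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Grounding the generation step in retrieved passages is what mitigates hallucination: the model is asked to synthesize from enterprise content rather than from its training data alone.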
Building Robust Tech Foundations for Seamless Data Flow. Commentary by Shobhit Khandelwal, Founder & CEO of ShyftLabs
“In an era where data is the new oil, ensuring a seamless data flow is imperative for any business, particularly in the retail and eCommerce sector.
The cornerstone of this approach lies in the integration of advanced technologies like big data, AI, machine learning and deep learning. These tools can decode vast volumes of unstructured data, transforming them into precise and actionable insights that drive business decisions. However, the key to truly harnessing their power is the creation of a strong technological infrastructure that ensures smooth and efficient data management.
This involves setting up scalable databases, implementing secure data pipelines, and utilizing cloud computing services for storage and processing. It also encompasses the use of data science methodologies to extract valuable information, predict trends, and optimize operations. While these processes may seem complex, they are essential for the successful operation of any data-driven organization.
The journey towards building a robust tech foundation for seamless data flow may be challenging, but the rewards it brings in terms of operational efficiency, informed decision-making, and ultimately, business success, make it incredibly worthwhile.
A well-structured tech foundation isn’t just about managing data; it’s about truly understanding it, and then leveraging it to propel your business forward.”
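As a concrete illustration of the pipeline steps Khandelwal describes, here is a minimal extract-transform-load sketch. The file names and fields are hypothetical; a production pipeline would add validation, monitoring, and secure transport.

```python
# A minimal extract-transform-load (ETL) step.
# All names (orders.csv, the revenue field) are hypothetical.
import csv
from datetime import datetime, timezone

def extract(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[dict]:
    cleaned = []
    for row in rows:
        try:
            cleaned.append({
                "order_id": row["order_id"].strip(),
                "revenue": float(row["revenue"]),
                "loaded_at": datetime.now(timezone.utc).isoformat(),
            })
        except (KeyError, ValueError):
            continue  # in practice, route bad records to a dead-letter store
    return cleaned

def load(rows: list[dict], path: str) -> None:
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["order_id", "revenue", "loaded_at"])
        writer.writeheader()
        writer.writerows(rows)

load(transform(extract("orders.csv")), "orders_clean.csv")
```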
Training Predictive Models on Encrypted Data using FHE. Commentary from Andrei Stoian, Machine Learning Director at Zama
“The implications of Fully Homomorphic Encryption (FHE) stretch far into the future of ML, unlocking use-cases where data privacy isn’t just a requirement but a cornerstone. FHE is a technique that enables data to be processed blindly without having to decrypt it. By enabling the training of machine learning models on encrypted data, FHE introduces a new era of privacy protections in collaborative environments: entities can enrich their models by leveraging the data of others, without ever compromising the integrity and confidentiality of the information shared. This not only safeguards privacy but also fosters a culture of trust and cooperation across industries where privacy concerns have traditionally hindered progress.”
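As an illustration of the underlying principle, the sketch below uses the open-source python-paillier (`phe`) package, which is only additively homomorphic – a far simpler cousin of FHE, and not Zama’s actual stack – to show a server scoring a linear model on data it can never read. The weights and features are made up.

```python
# Computing on ciphertexts: the server applies model weights to
# encrypted features and never sees the underlying values.
# Paillier supports ciphertext + ciphertext and ciphertext * scalar,
# which is enough for a linear score; full FHE generalizes this.
from phe import paillier

# Client side: generate keys and encrypt private features.
public_key, private_key = paillier.generate_paillier_keypair()
features = [0.7, 1.3, 2.1]
encrypted_features = [public_key.encrypt(x) for x in features]

# Server side: score a linear model directly on the ciphertexts.
weights = [0.5, -1.2, 0.3]
encrypted_score = weights[0] * encrypted_features[0]
for w, x in zip(weights[1:], encrypted_features[1:]):
    encrypted_score += w * x

# Client side: only the private key holder can read the result.
print(private_key.decrypt(encrypted_score))  # 0.5*0.7 - 1.2*1.3 + 0.3*2.1 = -0.58
```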
SEC fines for AI washing. Commentary by Toby Coulthard, CPO of Phrasee
“AI-washing is pervasive in the marketplace. FactSet just analyzed S&P 500 earnings calls and found that 179 companies cited the term “AI” during their fourth-quarter earnings call. This number is well above the 5-year average of 73 and the 10-year average of 45.
The cause is twofold. First, AI is one of the only areas of tech right now where investors prioritize growth over value, meaning they’re weighing revenue over near-term profits. In a high-interest-rate environment, it naturally behooves any business to associate itself with the AI sector as much as possible to maintain or inflate its value. Second, the definition of AI is nebulous: there is no clear line between the use of LLMs, neural networks, machine learning, or just an application of data science. This gives businesses a lot of latitude in associating themselves with AI.
The problem is that an intrinsic motivation to preserve or inflate market capitalization, combined with an under-defined concept, creates a big gray area around what is appropriate. A good litmus test is to see which companies were talking about AI prior to ChatGPT’s release, and which only started talking about it after the fact. Until the marketplace defines AI in a meaningful way, or until investors weigh AI claims in a more balanced way, I don’t expect the trend to slow down.”
The do’s and don’ts of AI code review to maximize efficiency and minimize error in your team’s workflow. Commentary by Kırımgeray Kirimli, President of Flatiron Software Co.
“If you have yet to integrate AI-powered coding tools into your workflow, you’re probably looking to join the 92% of U.S. developers who already have. What’s more, AI code review has evolved into a powerful ally for teams building more efficient and innovative coding environments. The AI ecosystem will continue advancing, so professionals must embrace the change and leverage these tools to stay ahead. Before introducing a new component to a collaborative workspace, it’s important to consider how best to prepare and encourage your team to use an evolving technology that is reinventing the industry.
While we can’t eliminate the risks and challenges of adopting AI technology – such as an expected increase in data breaches – there are tools and practices you can adopt to better safeguard those systems against malicious intent. Developers interested in incorporating the tool into their systems should first identify which code review applications offer security measures embedded in their overall functionality. Create a coding environment that exercises the same caution through privacy policies and clear protocols for reporting issues, so your team knows how to pinpoint and escalate vulnerabilities. It’s also crucial to ensure collaboration takes center stage, not only when addressing security concerns but also as a fundamental requirement for software professionals to succeed. AI code review has shown its value when combined with software development platforms like GitHub or JIRA, markedly improving review efficiency for teams working collectively on a project. Finally, keep in mind that AI code review is only as good as the training it receives. As AI code review evolves to solve workflow problems, it has also created new obstacles – most notably, dependency. To encourage the analytical thinking that minimizes over-reliance, collectively decide on techniques, like cognitive forcing functions or customized suggestions, that can be incorporated into the coding process. Foster a supportive environment that uses the tool effectively and responsibly: failing to work collaboratively tips the balance from AI synergy toward unhealthy dependency, a good reminder of why collaboration needs to be prioritized to maintain quality code, even between human and AI.”
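As one hedged illustration of keeping the human in the loop, the sketch below asks an LLM to review a staged git diff and prints its findings as suggestions for a person to triage. The prompt and model name are assumptions for illustration, not a specific product.

```python
# A human-in-the-loop AI review step: the model proposes comments,
# a person decides what ships. The tool never blocks a merge or
# rewrites code on its own.
import subprocess
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def staged_diff() -> str:
    # Collect the diff of staged changes from the local repository.
    return subprocess.run(
        ["git", "diff", "--cached"],
        capture_output=True, text=True, check=True,
    ).stdout

def review(diff: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a code reviewer. Flag bugs, security "
                        "issues, and unclear code. Phrase every finding "
                        "as a suggestion, not a directive."},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    diff = staged_diff()
    if diff:
        # Print suggestions for a human to triage before committing.
        print(review(diff))
```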
AI in call centers. Commentary by Dave Hoekstra, Product Evangelist, Calabrio
“I think we can say the buzz around AI has become a constant hum. We are seeing more and more organizations move from treating AI as the latest shiny new toy to realizing tangible benefits from it.
In the contact center world, AI’s big impact behind the scenes – enhancing the work lives of agents and leaders – leads to improved customer interactions. AI boosts agent and manager productivity, refines scheduling and forecasting accuracy, monitors overall contact center performance, predicts customer behavior, and adds a touch of chatbot charisma.
However, it’s essential to recognize that concerns exist amid the enthusiasm. Managers are apprehensive about AI’s influence on agents’ mental health and training needs. The noteworthy aspect of AI is its role as a supportive companion, rather than a job thief. It’s the sidekick that makes our work lives easier. Successful integration of AI into the contact center landscape will require organizations to formulate a strong game plan for navigating these challenges and ensuring success.”
Navigating the Rise of Quantum Computing. Commentary by Nathan Vega, Vice President, Product Marketing and Strategy, Protegrity
“Today’s digital landscape faces a significant challenge: ensuring security systems are maintained and data stays protected. The shift to quantum-resistant data security is critical, given that conventional encryption methods are vulnerable to the enhanced processing power of quantum computing. While current algorithms may take years to crack with traditional computing power, quantum computing could render them obsolete, posing substantial risks to businesses, universities, governments, and other entities relying on secure data management.
The emergence of quantum computing is a double-edged sword, promising groundbreaking advancements in data security while introducing formidable risks. Proactive measures are necessary to safeguard data for the future. More and more organizations are beginning to advocate for the early adoption of quantum-resistant cybersecurity solutions, stressing the importance of preparing before quantum computers become commercially available.
This urgency is exacerbated by the fact that terrorist networks and malicious actors have access to advanced technologies paralleling those used by legitimate businesses. While businesses utilize these tools for operational enhancements and customer experiences, threat actors exploit them for nefarious purposes. There are concerns that such actors are already harvesting encrypted data today so they can decrypt it once quantum computers become more accessible.
In anticipation of the quantum revolution, and to address the threat it poses, companies must assess their cybersecurity infrastructure for vulnerabilities related to quantum computing and implement quantum-resistant solutions now for long-term data security. While some traditional security measures will become obsolete, others, like tokenization, show promise against evolving threats. Tokenization substitutes real data with randomized tokens, providing robust security that is not easily compromised by quantum computing. Moreover, tokenization facilitates seamless data integration across platforms, empowering transformative initiatives in AI, machine learning, and analytics while fortifying the foundation of data security.”
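A minimal sketch of vault-based tokenization appears below. Real products add access controls, auditing, and format-preserving tokens; the point here is simply that a random token carries no mathematical relationship to the data it replaces, so there is nothing for a quantum computer to crack.

```python
# Minimal vault-based tokenization: the token is random, so only
# access to the vault (not computing power) can reverse it.
import secrets

class TokenVault:
    def __init__(self):
        self._vault: dict[str, str] = {}  # token -> real value

    def tokenize(self, value: str) -> str:
        token = secrets.token_hex(16)  # no mathematical link to the value
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # Only callers with vault access can recover the real data.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")  # hypothetical card number
print(token)                    # safe to store and share across systems
print(vault.detokenize(token))  # controlled reversal inside the trust boundary
```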
Amid the threat of cyberattacks, humans must remain central to the solution. Commentary by Patrick Hayes, Chief Strategy and Product Officer, Third Wave Innovations
“Artificial intelligence is hailed by many as a cure-all for everything that ails us, including preventing and addressing cyberattacks. But, as security professionals home in on AI tools to address this growing threat, caution is required, especially as bad actors deploy AI themselves.
Just as we’re using AI to fix grammar in our emails, AI tools are helping hackers refine their phishing emails or vishing phone calls to better lure victims. In fact, hackers with just a “basic grasp of English language” can now easily and realistically impersonate people with ChatGPT, according to a report from Europol, a European police agency.
And while businesses may be using Large Language Models, or LLMs, to write code or better support customers, attackers are manipulating LLMs to distribute misinformation to various data sources.
In this battle for digital data, cybercriminals have huge incentives to fully embrace AI and stay far ahead of the rest of us. The impact of cybercrime will likely total $9.5 trillion in 2024, Cybersecurity Ventures predicts. That’s a big payday for bad actors. In response, organizations must go on the offensive. Technology, including AI tools, plays a role in proactive efforts to thwart cyberattacks. But tech solutions aren’t the only answer – or even the main one.
Humans, not bots, offer the critical-thinking skills required to use AI in the right way and deploy it to solve and prevent cyberattacks. What’s more, we bring an added layer of intuition and adaptability to flag something that doesn’t feel right and quickly respond to it.
So, as we plow forward into this uncertain future amid ever-evolving cyberattacks, balance and discretion are advised. The best offense and defense must include leveraging the insights and pragmatism of real people to protect against the very real threat of cyberattacks.”
Use the Containers Playbook to Effectively Leverage AI at the Edge. Commentary by Francis Chow, Red Hat VP and GM, In-Vehicle Operating System and Edge
“AI is poised to disrupt every industry. We are seeing this happening in manufacturing, especially at the industrial edge.
The implementation of artificial intelligence in edge computing environments enables computation to be performed close to where data is stored and processed. This removes the cost and latency involved when data must move from edge devices to a centralized cloud computing facility or an offsite data center and then back again. For manufacturers, this means that the return on AI investment can be significant and realized quickly.
To get to this point, it’s critical that manufacturers break free from system architectures that are decades old and/or siloed. Applying cloud-native IT principles to build more scalable and manageable OT systems means that IT and OT teams need to work together to achieve the best results. With legacy systems, edge devices will likely lack the ability to take advantage of technology advancements such as AI. Containers can help industrial organizations develop and deploy AI applications at the edge because they are lightweight and portable, and they run efficiently and securely across a variety of devices and platform types. Containers’ modular nature also makes it easier for developers to quickly and efficiently iterate on applications, which is important when issues do occur.
The Kubernetes container orchestration system, in a sort of mini form, is also an important tool for the successful deployment of AI at the industrial edge. The open source MicroShift project uses a consistent Kubernetes API to extend operational consistency and scalability for hybrid cloud deployments all the way to the edge. MicroShift can be deployed in highly space-constrained environments and run on extremely low-power hardware while enabling teams to leverage familiar tools and processes.
Just as container technology enabled organizations to break down application monoliths into portable, flexible microservices, so, too, will containerization help manufacturers transform their manufacturing systems with AI at the edge.”
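For a sense of what this looks like in practice, here is a sketch that deploys a containerized inference service with the standard Kubernetes Python client; because MicroShift exposes a consistent Kubernetes API, the same code can target an edge device. The image name, labels, and namespace are hypothetical.

```python
# Deploying a containerized inference service via the Kubernetes API.
# Pointing the kubeconfig at a MicroShift instance lets the identical
# code target an edge node.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig may point at MicroShift on an edge device

labels = {"app": "edge-inference"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="edge-inference"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="model-server",
                    image="quay.io/example/defect-detector:1.0",  # hypothetical image
                    ports=[client.V1ContainerPort(container_port=8080)],
                    # Modest limits suit low-power edge hardware.
                    resources=client.V1ResourceRequirements(
                        limits={"cpu": "500m", "memory": "512Mi"}
                    ),
                )
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment
)
```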
Three Ways to Guarantee Successful Deployment of Artificial Intelligence. Commentary by Mubbin Rabbani, Chief Product Officer, CLARA Analytics
“For AI to be successful, IT teams need to address three common failure modes right off the bat: (i) Failing to keep the human in the loop. Use AI to augment the workflow and enhance the employee experience rather than designing an AI application to replace employees. Enterprises that approach AI as a human replacement generally see resistance and low adoption rates; (ii) Lack of open and consistent communication with the frontlines. Many employees are fearful of AI, and these fears need to be addressed. If employees don’t get comfortable with and use the AI tools developed for them, failure is certain. Start with small steps to earn employee trust and build from there; and (iii) Limited data. Internally developed AI projects often fail because the underlying data is limited in volume and scope. AI needs to be trained on more data than might be available within your organization. Find trusted partners that can provide high-quality data in large volumes.”
Sign up for the free insideAI News newsletter.
Join us on Twitter: https://twitter.com/InsideBigData1
Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/
Join us on Facebook: https://www.facebook.com/insideAINewsNOW