Heard on the Street – 4/18/2024

Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI, and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.

Generative AI is not rendering Machine Learning (ML) obsolete. Commentary by Maitreya Natu, Chief Data Scientist at Digitate  

“As conversations around GenAI’s momentum continue, it’s important to keep in mind that GenAI will not render ML obsolete in the near future.

ML algorithms are still key in a variety of business-critical use cases that GenAI is not natively built for. For example, ML algorithms excel at learning from large datasets and addressing problems such as pattern mining, anomaly detection, prediction, root-cause analysis, recommendation engines, and optimization. These algorithms enable capabilities ranging from disease diagnosis in healthcare to fraud detection in banking to customer behavior analysis in retail to self-driving vehicles in the automotive industry.
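As an illustration of the kind of ML workload described above, here is a minimal anomaly-detection sketch using scikit-learn’s IsolationForest; the transaction data, threshold, and parameters are invented for the example and are not from any vendor’s system.

```python
# Minimal anomaly-detection sketch (illustrative only): flag unusual
# transaction amounts with an Isolation Forest, one of the classic ML
# workloads the commentary refers to. Data and parameters are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=50.0, scale=10.0, size=(1000, 1))    # typical transactions
outliers = rng.uniform(low=300.0, high=500.0, size=(10, 1))  # suspicious spikes
amounts = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(amounts)

labels = model.predict(amounts)  # +1 = normal, -1 = anomaly
flagged = amounts[labels == -1]
print(f"Flagged {len(flagged)} of {len(amounts)} transactions as anomalous")
```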

GenAI solutions now enable automatic code creation and app development, but GenAI in its present form cannot substitute for ML algorithms. GenAI algorithms train on large datasets and use ML techniques to create new content, which can be text, images, music, or even code. GenAI thus excels at creative content generation and personalized experiences, enabling use cases such as marketing and sales content generation, graphic design, conversational engines, and customer support. ML solutions are still required to address unsolved problems or improve current solutions in descriptive, diagnostic, predictive, and prescriptive intelligence. 

While GenAI is not mature enough to replace ML investments, it can certainly act as an accelerator that enhances the effectiveness and adoption of ML algorithms. ML algorithms depend heavily on the availability of large volumes of training data, so GenAI’s ability to create synthetic data can be very helpful in training AI models and easing adoption by business users.
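To make the synthetic-data point concrete, here is a rough sketch of using a generative model to produce labeled examples that augment a small training set. The model name, prompt, and label set are illustrative assumptions, not Digitate’s implementation.

```python
# Rough sketch (illustrative assumptions, not Digitate's implementation):
# use a generative model to produce synthetic labeled examples that can
# augment a small training set for a conventional ML classifier.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = ["billing_issue", "login_problem", "feature_request"]

def synthesize_examples(label: str, n: int = 5) -> list[str]:
    """Ask the model for n synthetic support tickets for a given label."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Write {n} short, realistic customer support tickets "
                       f"about '{label}', one per line.",
        }],
    )
    return response.choices[0].message.content.splitlines()

# The synthetic rows would then be appended to the real training data
# before fitting the ML model.
synthetic = {label: synthesize_examples(label) for label in LABELS}
```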

There is also often a need to bring human expertise into the loop to realize ML solutions in practice. For instance, human intervention is required to convert analytics observations into actionable recommendations, to guide an AI engine through exceptional conditions, or to confirm a prediction made by an AI engine. Used together, GenAI and ML can bridge the gap between machine intelligence and human expertise.

Organizations should continue their ML investment but should realign their strategy to keep GenAI in mind.”

US AI Mandates. Commentary by Anita Schjøll Abildgaard, CEO and co-founder of Iris.ai

“The introduction of binding AI requirements for U.S. federal agencies is a significant step that aligns with the global movement towards increased AI governance and regulation. This comes on the heels of the EU AI Act, which has set the stage for a comprehensive risk-based regulatory framework for artificial intelligence systems. As nations grapple with the societal implications of AI, coordinated efforts to establish guardrails are crucial for fostering innovation while upholding trust, core values and human rights.

By mandating AI leadership roles, the regulations ensure there are domain experts intimately involved in risk assessments and the crafting of tailored agency policies. Having this centralised AI oversight helps uphold the standards and guardrails needed as agencies increasingly lean on AI tools for mission-critical operations impacting citizens’ rights, opportunities, and well-being. It establishes a locus of accountability essential to harnessing AI’s capabilities responsibly and avoiding unintended negative consequences from ill-conceived adoption.”

Fraud in the Era of Generative AI. Commentary by Dan Pinto, CEO and co-founder of Fingerprint

“The rise of generative AI rings in a new set of concerns for fraud detection. As the technology becomes more widespread, sophisticated fraudsters are doubling down on web scraping and social engineering attacks to steal information. With generative AI, bad actors can more efficiently steal information and train large language models (LLMs) to scrape personal information and intellectual property. 

While web scraping isn’t illegal, fraudsters often use scraped data for attacks like phishing, account takeovers and credential stuffing. Even well-intentioned web scraping can harm consumers by consuming bandwidth, resulting in longer load times and disrupted services. Further, duplicate content can be harmful to search engine optimization. 

Businesses should strengthen their fraud detection with device identification to address generative AI-related fraud. Two-factor authentication helps, but device intelligence takes fraud prevention to the next level by distinguishing between bots and legitimate human users. Bots often send signals like errors, network overrides and browser attribute inconsistencies that differ from those of legitimate web users. Device intelligence monitors for and detects suspicious actions associated with bots or other fraudulent behavior, like repeated login attempts or account creation with compromised information.” 
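As a toy illustration of the kinds of signals mentioned above, the sketch below scores a request on a few hypothetical bot indicators. The attribute names and scoring rules are invented for the example and are not Fingerprint’s actual detection logic.

```python
# Toy illustration of bot-signal checks (NOT Fingerprint's actual logic).
# It scores a request on a few hypothetical indicators: inconsistent
# browser attributes, automation flags, and rapid repeat logins.
from dataclasses import dataclass

@dataclass
class RequestSignals:
    user_agent_claims_chrome: bool
    has_chrome_runtime: bool         # Chrome-only JS object actually present?
    webdriver_flag: bool             # navigator.webdriver set by automation tools
    login_attempts_last_minute: int

def bot_score(s: RequestSignals) -> int:
    score = 0
    if s.user_agent_claims_chrome and not s.has_chrome_runtime:
        score += 2   # browser attribute inconsistency
    if s.webdriver_flag:
        score += 3   # explicit automation signal
    if s.login_attempts_last_minute > 5:
        score += 2   # credential-stuffing pattern
    return score

signals = RequestSignals(True, False, True, 12)
print("likely bot" if bot_score(signals) >= 3 else "likely human")
```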

AI innovations will transform cybersecurity for the better. Commentary by Vinay Anand, CPO of NetSPI

“Every new paradigm shift brings along a specific set of challenges, and AI is no different. Abuse and misuse aside, AI will make cybersecurity issues more addressable in the long run and help address the industry’s daunting skills shortage. Through the right training models, AI will become a massive force multiplier and tackle the influx of data sitting in organizations awaiting analysis. Currently, data is so complex and multiplying so rapidly that threats are going undetected and unpatched. AI can get organizations the results they need faster – ultimately allowing teams to be more proactive with their security practices – and help identify the vulnerabilities that matter most to organizations, saving them millions of dollars by heading off an impending breach. A year from now, if you ask any security leader, they’ll tell you that today’s AI innovations have changed the nature of what we do in cybersecurity for the better.”

The Four Pillars of Principled Innovation. Commentary by Doug Johnson, vice president of product management, Acumatica

“In an era where the clamor for artificial intelligence (AI) innovation reverberates across boardrooms, businesses are under pressure from stakeholders to rapidly develop AI-powered solutions to stay ahead of the curve. In fact, 46% of board members recently pinpointed AI innovation as their top concern, surpassing all other priorities.

This push for innovation raises a pivotal question for businesses: How can we ensure our efforts in AI deliver tangible value for users? The key to navigating this challenge lies in adopting a principled approach to innovation that balances the drive for cutting-edge solutions with the imperative to meet customer needs.

Below are four pillars decision-makers should consider to ensure a principled, pragmatic approach to innovation.

Be Practical. Businesses must prioritize practicality, focusing on developing solutions that address real-world challenges and offer valuable benefits rather than pursuing innovation for its own sake. When delivering features, the User Interface (UI) must be transparent and clear regarding the use of AI.

Be Customer-Centric. Core to practical innovation is being customer-focused, placing customer needs at the heart of innovation efforts. By understanding what customers need, technology vendors can create AI solutions that resonate strongly with target audiences.

Be Thoughtful. It’s crucial to take a phased approach to innovation. Rushing to market with untested or underdeveloped technologies can lead to disappointing outcomes. A more refined process allows for improving solutions over time, reducing risks and enhancing the quality of the final product. 

Be Responsible. Establish clear internal guidelines for innovation initiatives. This process includes providing teams with the necessary resources, setting benchmarks for success, conducting thorough testing and soliciting user feedback. Ensure AI features generate quality results while avoiding discrimination and bias. Validate that learning models respect data security policies to ensure there’s no unauthorized data sharing. Make sure that AI results won’t change your data without your consent. Externally, be clear with customers on product roadmaps and future initiatives to align with their evolving needs.

By adhering to these pillars, businesses can effectively navigate the terrain of AI innovation, ensuring that their efforts lead to impactful outcomes that differentiate them in the marketplace.”

AI Agents Emerge As The Ultimate Analysts. Commentary by Trey Doig, co-founder / CTO, Echo AI

“There is still a broad perception that LLMs’ primary function is content creation, such as writing papers or generating images. But there’s a far more fascinating, yet underexplored, application: AI agents as data analysts.

AI agents, though fundamentally simple, extend far beyond basic computational processes. Originating from projects like Auto-GPT, which demonstrated that an agent could be developed with as little as 80 lines of Python code, the term ‘AI agent’ now encompasses a wide range of technological capabilities. Their simplicity belies their impact: agents hold immense potential for automating and refining analytical tasks traditionally performed by humans.
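For readers unfamiliar with the pattern, here is a compressed sketch of the agent loop that projects like Auto-GPT popularized: the model repeatedly picks a tool, observes the result, and stops when it can answer. The tool, model name, and JSON protocol are illustrative assumptions, not Echo AI’s implementation.

```python
# Compressed sketch of the "agent loop" pattern (illustrative only).
# Assumes the model follows the JSON instruction; a production agent
# would validate outputs and handle errors.
import json
from openai import OpenAI

client = OpenAI()

def count_rows(table: str) -> str:
    return f"{table} has 1240 rows"  # stand-in for a real data query

TOOLS = {"count_rows": count_rows}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [
        {"role": "system",
         "content": ("You are a data analyst. Reply ONLY with JSON: "
                     '{"tool": "<name>", "arg": "<value>"} to call a tool, '
                     'or {"answer": "<text>"} when done. '
                     f"Available tools: {list(TOOLS)}")},
        {"role": "user", "content": goal},
    ]
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="gpt-4o-mini", messages=history
        ).choices[0].message.content
        decision = json.loads(reply)
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](decision["arg"])
        history += [{"role": "assistant", "content": reply},
                    {"role": "user", "content": f"Observation: {result}"}]
    return "Stopped without an answer."

print(run_agent("How many rows are in the orders table?"))
```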

While the real-life application of AI agents in enterprises is still finding its footing, their potential to act as virtual data analysts—tagging, categorizing, and synthesizing large volumes of information—cannot be ignored. They mark a pivotal shift in managing and interpreting data, offering a level of efficiency and insight previously unattainable with human analysts alone.”

Redefining Work: How Taskification and Technology are Shaping the Future of Jobs. Commentary by Dr. Kelly Monahan, Managing Director of Upwork Research Institute at Upwork 

“AI doesn’t want your job, but it does want to help you with the repetitive and mundane tasks you do every day. 

After more than 150 years, the very concept of a “job” is changing. Traditionally, jobs have been static and discrete, with straightforward responsibilities and select skills required; however, forces like distributed work and the rise of AI are upending the traditional concept. It’s no longer about what is listed in a job description, but the skills and tasks needed to get the work done. Some of these tasks are better suited for humans, others are better for AI.

Taskification, the process of breaking a job into smaller and discrete tasks that can be easily managed, measured and executed, is helping organizations redesign the flow of work to succeed in this new reality. This approach has gained popularity over the years in various sectors, as it allows for more flexibility in how work is organized and performed; however, it will become even more prevalent due to emerging technologies like AI.

The rise of AI is enabling companies to automate routine tasks and focus human talent on higher-value work. Yet much of this new higher-value work is emergent and requires redesigning work around business problems and customer needs, not jobs. Starting with a job role, rather than a problem, makes it hard to surface the work and skills that are actually required. In addition, the fear-based narrative of AI taking over jobs persists with this outdated lens. Instead, leaders who actively reshape and design job roles around the essential tasks and skills required foster a culture where talent development aligns with working alongside technologies. 

The future of work is finally here. A future where both people and technologies work together to solve problems rather than simply do a job.”

What’s blocking the seamless adoption of AI in business processes? Messy data. Commentary by Tina Kung, co-founder and CTO of Nue.io.

“Despite the hype we’ve seen around generative AI (genAI) over the past year or so, its adoption in enterprise has been noticeably sluggish — with just one-third of global firms incorporating it into their operations. This partly comes down to the fact that leaders have been approaching the tech in the wrong way. While many businesses see AI tools as a means to help them sort through their hefty amounts of internal data, the truth is that AI isn’t, in fact, a panacea to baked-in, inefficient processes. Feeding poor-quality data to AI systems won’t yield high-quality output. Data needs to be properly cleansed and integrated before AI comes into the picture. The proliferation of diverse SaaS solutions means that many businesses nowadays employ an array of different tools across sales, quoting, billing, marketing, and other functions, with each generating distinct datasets. This creates a huge visibility problem for businesses, as customer revenue information is scattered across multiple systems that don’t speak the same language. 

But AI alone can’t stitch together disparate data. For starters, genAI’s ability to solve complex math and data transformation problems is still extremely limited. It’s also prone to hallucinations — that is, inaccurate or nonsensical outputs — making it a highly risky technology to use when handling complex financial systems data. Even a minor error can have a domino effect, causing financial discrepancies and disrupting a number of wider processes. Since financial data processes require 100% accuracy, genAI’s current limitations make it unsuitable for handling these tasks without human intervention. To get the most out of AI, companies should limit the number of SaaS tools they use to ensure that data is clean and consistent. However, opting for “one-size-fits-all” solutions may sacrifice specific functionalities. Ultimately, businesses need to identify the metrics that best gauge success (which will look different for every company), and then align their systems, processes and tools accordingly.”
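To illustrate the “cleanse and integrate before AI” step argued for above, here is a small sketch that normalizes customer revenue records exported from two hypothetical SaaS tools into one consistent table. The column names, values, and systems are invented for the example.

```python
# Small sketch of cleansing and integrating revenue data from two
# hypothetical SaaS exports before any AI is applied. All names and
# figures are illustrative assumptions.
import pandas as pd

billing = pd.DataFrame({
    "Customer": ["Acme Corp ", "globex"],
    "ARR (USD)": ["12,000", "8,500"],
})
crm = pd.DataFrame({
    "account_name": ["ACME CORP", "Globex"],
    "annual_revenue": [12000.0, 8700.0],
})

def normalize(df: pd.DataFrame, name_col: str, rev_col: str, source: str) -> pd.DataFrame:
    """Map one export onto a shared schema with cleaned names and numbers."""
    return pd.DataFrame({
        "customer": df[name_col].str.strip().str.lower(),
        "annual_revenue": pd.to_numeric(
            df[rev_col].astype(str).str.replace(",", ""), errors="coerce"),
        "source": source,
    })

combined = pd.concat([
    normalize(billing, "Customer", "ARR (USD)", "billing"),
    normalize(crm, "account_name", "annual_revenue", "crm"),
])

# Conflicting figures across systems surface here, before any AI sees the data.
print(combined.groupby("customer")["annual_revenue"].agg(["min", "max"]))
```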

Developers in the job market. Commentary by Keith Pitt, co-founder and CEO of Buildkite

“Developers in the job market should look first and foremost at company culture, as overall developer happiness is a key indicator that a company provides its developers with tools that set them up for success. If sentiment amongst developers is low, it often means the company relies on outdated tools that slow build and test times and, as a result, places undue pressure on developers who are limited in what they can accomplish. 

In a difficult macro-economic environment, many tech companies are prioritizing quick deployment cycles above all else. What these companies don’t realize is that relying on antiquated legacy tools sends developer morale plummeting. When old processes are a time suck, management can use developers as a scapegoat for long turnaround times. In addition to stress from unsatisfied management, developers, who are inherently innovative, have their creative spark extinguished by wasted compute and lost energy. This results in a misuse of talented developers and a decline in their satisfaction – all of which could have been avoided with appropriate tools that promote developer-centric cultures.

Developers seeking new opportunities that prioritize their fulfillment should scour the internet and utilize their networks to determine the overall happiness of developers at a company, as this can be a telltale sign as to whether a company is utilizing future-facing tools, or if they are living in the past. Nothing says ‘my company offers the best tools’ like positive Blind and Glassdoor reviews.”

Leveraging smart data to better the planet, or how data is utilized to help refuel the world. Commentary by Gavin Patterson, Chairman, Kraken, part of the Octopus Energy Group 

“Worldwide, the systems we use to manage, distribute, and optimize energy are decades behind where they should be. This is largely due to critical grid infrastructure hinging on a patchwork of legacy systems—making it ill-equipped to match the rapid adoption of heat pumps, EVs, rooftop solar, and other electrified technologies. While governments and industry leaders continue to fast-track grid decarbonization and modernization, utilities will need new solutions to intelligently manage the energy grid of the future. 

As a result of strides in data management and analysis technology, utilities now have the opportunity to deploy solutions capable of flexibly managing and optimizing energy generation, distribution and residential consumption. The comprehensive understanding of energy usage patterns these solutions provide allows for more informed decision-making and optimal energy distribution. Next-generation software solutions with consistent energy data monitoring and analysis capabilities are critical for utilities as they navigate this unprecedented grid strain. By analyzing data points from distributed energy sources, utilities will gain deeper insights into customer energy needs and behaviors. Next-generation platforms ensure utilities offer up-to-date solutions and personalized support, unlike legacy systems that may struggle to keep pace with evolving grid needs.

Over the last few years, I’ve seen the United Kingdom become a pivotal testing ground for this early grid transformation. We’ll only see more of this as utilities across Europe, Asia, and North America continue to turn to the UK for its technology solutions that digitize and decentralize the grid at scale.” 

Sign up for the free insideAI News newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideAI NewsNOW