Heard on the Street – 6/3/2024

Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.

OpenAI’s GPT-4o Delivers for Consumers, but What About Enterprises? Commentary by Prasanna Arikala, CTO of Kore.ai

“These models should be trained by enterprises to generate outputs within predefined boundaries, avoiding responses that fall outside the model’s knowledge domain or violate established rules. Platform companies should focus their efforts on developing solutions that facilitate this controlled model building and deployment process for enterprises. By providing tools and frameworks for enterprises to build, fine-tune, and apply constraints to these models based on their requirements, platform companies can enable wider adoption while mitigating potential risks. The key is striking a balance between harnessing the power of advanced language models like GPT-4o and implementing robust governance mechanisms with enterprise-level controls. This balanced approach ensures responsible and reliable deployment in real-world business scenarios.”
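To make the idea of ‘predefined boundaries’ concrete, here is a minimal hypothetical sketch of an enterprise-side guardrail wrapper. The names and topic list are illustrative assumptions, not Kore.ai or OpenAI APIs:

```python
# Validate a model's draft against predefined boundaries before it reaches users.
# ALLOWED_TOPICS and all function names are hypothetical, for illustration only.
ALLOWED_TOPICS = {"billing", "orders", "shipping"}

def within_boundaries(response: str, topic: str) -> bool:
    """Accept only non-empty answers within the approved knowledge domain."""
    return topic in ALLOWED_TOPICS and bool(response.strip())

def guarded_answer(llm_call, prompt: str, topic: str) -> str:
    draft = llm_call(prompt)  # raw model output (e.g., from a hosted GPT-4o)
    if not within_boundaries(draft, topic):
        return "This request is outside the supported domain; routing to a human agent."
    return draft

# Stub model for demonstration; in practice llm_call would hit a hosted LLM.
demo_llm = lambda prompt: "Your invoice was issued on May 28."
print(guarded_answer(demo_llm, "When was my invoice issued?", topic="billing"))
```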

The benefits of AI in software development. Commentary by Rob Whiteley, CEO at Coder

“A growing concern is ‘productivity debt’ – the accumulated burden and inefficiencies keeping developers from effectively utilizing their time for coding. This is especially true in large enterprises, where developers may spend as little as 6% of their time on coding tasks. Generative AI has emerged as a transformative solution for developers, both at the enterprise and individual level. While AI isn’t meant to replace human input entirely, its role as an assistant significantly expedites coding tasks, particularly the tedious, manual ones.

The benefits of AI in software development are clear: it speeds up coding processes, reduces errors, enhances code quality and optimizes developer output. This is especially true when generative AI fills in the blanks or autocompletes a line of code with routine syntax – eliminating the potential for typos and human error. AI can also generate documentation and code comments – tasks that tend to be extremely tedious and take away from writing actual code. Essentially, generative AI completes code faster for a direct productivity gain, while reducing manual errors and typos – an indirect productivity gain that results in less human inspection of code. It also improves the overall developer experience, keeping developers in flow. Despite generative AI’s enormous promise in the software development space, it’s crucial to approach AI outputs critically, verifying their accuracy and ensuring alignment with personal coding styles and company coding standards or guidelines.

It’s important to recognize that AI augments rather than replaces developers, making them more effective and efficient. By prioritizing investments that benefit the broader developer population, enterprises can accelerate digital transformation efforts and mitigate productivity debt effectively. Generative AI holds immense promise for enhancing productivity – not only for developers, but for entire enterprises. It reshapes workflows and achieves dramatic time and cost savings across the enterprise. Embracing AI as an interactive and supplementary tool empowers developers to be more productive, get in ‘the flow’ easier and spend more time coding and less time on manual tasks.”

Italy to deploy supercomputer to study effects of climate change. Commentary by Philip Kaye, Co-founder and Director of Vesper Technologies

“The deployment of new supercomputers like Italy’s Cassandra system underscores the growing global demand for the latest high-performance computing (HPC) hardware, capable of tackling complex challenges such as climate change modelling and prediction. However, meeting these intensifying HPC requirements is becoming increasingly difficult with traditional air-cooling solutions. It’s fitting, then, that a supercomputer being used by the European Centre on Climate Change is utilizing the latest liquid cooling innovation to limit the environmental impact of the supercomputer itself.

As we enter the exascale era, liquid cooling is rapidly transitioning to a mainstream necessity, even for CPU-centric HPC architectures. Lenovo’s liquid-cooled Neptune platform exemplifies this trend, circulating liquid refrigerants to efficiently absorb and expel the immense heat generated by cutting-edge CPUs and GPUs. This enables the latest processors and accelerators to operate at full speed within dense data center environments.

The benefits of reduced energy consumption, lower environmental impact, and higher computing densities afforded by liquid cooling are making it an integral part of HPC designs. As a result, robust liquid cooling solutions will likely be table stakes for any organization looking to future-proof their HPC infrastructure and maintain a competitive edge in domains like scientific simulation and climate modelling.”

Big Data Analytics: Enabling the move from spatiotemporal data to quickest event detection. Commentary by Houbing Herbert Song, IEEE Fellow

“Identifying and forecasting unusual events has been a major issue in a variety of fields, including pandemics, chemical leaks, cybersecurity, and safety. Effective responses to unusual events will require quickest event detection capabilities.

By leveraging massive spatiotemporal datasets to analyze and understand spatiotemporally distributed phenomena, big data analytics has the potential to revolutionize algorithmically informed reasoning and sense-making over spatiotemporal data, thereby enabling the move from massive spatiotemporal datasets to quickest event detection. Quickest detection refers to real-time detection of abrupt changes in the behavior of an observed signal or time series, as quickly as possible after they occur.

This capability is essential to the design and development of safe, secure, and trustworthy AI systems. There is an urgent need to develop a domain-agnostic big data analytics framework for quickest detection of events, including but not limited to pandemics, Alzheimer’s disease, threats, intrusions, vulnerabilities, anomalies, malware, bias, chemical leaks, and out-of-distribution (OOD) data.”
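Quickest detection has classical formulations such as the CUSUM change-point statistic. As a grounding illustration only – a minimal sketch, not the domain-agnostic framework Song calls for – a one-sided CUSUM detector can be written in a few lines:

```python
import numpy as np

def cusum_detect(x, target_mean=0.0, drift=0.5, threshold=5.0):
    """One-sided CUSUM: accumulate deviations above target_mean (less an
    allowance 'drift') and raise an alarm when the running statistic
    crosses 'threshold'. Returns the alarm index, or None if no change."""
    s = 0.0
    for i, xi in enumerate(x):
        s = max(0.0, s + (xi - target_mean - drift))
        if s > threshold:
            return i
    return None

# Example: the mean shifts from 0 to 2 at t=100; the alarm fires shortly after.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
print(cusum_detect(series))
```

The detection delay (alarm index minus the true change point) is the quantity quickest-detection schemes are designed to minimize, subject to a false-alarm constraint.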

X’s Lawsuit Against Bright Data Dismissed. Commentary by Or Lenchner, CEO, Bright Data

“Bright Data’s victory over X makes it clear to the world that public information on the web belongs to all of us, and any attempt to deny the public access will fail – as demonstrated in several recent cases, including our win in the Meta case.

What is happening now is unprecedented, and has profound implications in business, research, training of AI models, and beyond.

Bright Data has proven that ethical and transparent scraping practices for legitimate business use and social good initiatives are legally sound. Companies that try to control user data intended for public consumption will not win this legal battle.

We’ve seen a series of lawsuits targeting scraping companies, individuals, and nonprofits. They are used as a monetary weapon to discourage the collection of public data from sites, so that conglomerates can hoard user-generated public data. Courts recognize this, as well as the risks it poses: information monopolies and ownership of the internet.”

Making the transition from VMware. Commentary by Ted Stuart, President of Mission Cloud

“Organizations relying on VMware environments can see significant benefits by transitioning to native cloud services.  Beyond potential cost savings, native cloud platforms offer enhanced control, automation, architectural flexibility, and reduced maintenance overhead.  Careful planning and exploring options like managed services or targeted upskilling can ensure a smooth migration process.” 

Adapting AI Platforms to Hybrid or Multi-Cloud Environments. Commentary by Bin Fan, VP of Technology, Founding Engineer, Alluxio

“AI platforms can adapt to hybrid or multi-cloud environments by leveraging a data layer that abstracts away the complexities of underlying storage systems. This layer not only ensures seamless data access across different cloud environments but also saves egress costs. Additionally, the use of intelligent caching mechanisms and scalable architecture optimizes data locality and reduces latency, thereby enhancing the performance of the end-to-end data pipelines. Integrating such a system not only simplifies data management but also maximizes the utilization of computing resources like GPUs, ensuring robust and cost-effective AI operations across diverse infrastructures.” 
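For illustration, the abstraction the commentary describes can be pictured as a thin read layer over heterogeneous stores with a cache in front. The sketch below is purely hypothetical – it is not Alluxio’s API, and the backends are stand-ins:

```python
from functools import lru_cache

# Hypothetical backends keyed by URI scheme; real systems would wrap SDK clients.
BACKENDS = {
    "s3://": lambda path: f"<bytes from AWS for {path}>",
    "gs://": lambda path: f"<bytes from GCP for {path}>",
}

@lru_cache(maxsize=1024)  # repeat reads are served locally, avoiding egress fees
def read(path: str) -> str:
    """Resolve a URI to whichever cloud holds it, caching hot objects."""
    for prefix, fetch in BACKENDS.items():
        if path.startswith(prefix):
            return fetch(path)  # cold read: one trip to the remote store
    raise ValueError(f"no backend for {path}")

print(read("s3://bucket/train.parquet"))  # remote fetch
print(read("s3://bucket/train.parquet"))  # cache hit: no egress cost
```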

AI and machine learning in software development. Commentary by Tyler Warden, Senior Vice President, Product at Sonatype

“AI and Machine Learning have established themselves as transformative tools for software development teams, and most organizations are looking to embrace AI/ML for many of the same reasons they’ve embraced open source components: faster delivery of innovation at scale.

We actually see a lot of parallels between the use of AI and ML today and open source years ago, which offers an opportunity to apply the lessons learned from open source to ensure safe, effective usage of AI and ML. For example, in the beginning, leadership didn’t know how much open source was being used – or where. Then, Software Composition Analysis solutions came along to evaluate the security, compliance and code quality of those components.

Similarly, organizations today want to embrace AI/ML but do so in ways that ensure the right mixture of security, productivity and legal outcomes. To do so, software development teams must have tools that identify where, when and how they’re using AI and ML.” 
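As a rough illustration of the kind of inventory tooling Warden describes – a hypothetical sketch, not a Sonatype product – a first pass could simply scan declared dependencies for known AI/ML packages:

```python
# The package list is an illustrative assumption, not a curated database.
ML_PACKAGES = {"torch", "tensorflow", "transformers", "scikit-learn", "openai"}

def find_ml_dependencies(requirements_path: str) -> set:
    """Return the AI/ML packages declared in a pip requirements file."""
    found = set()
    with open(requirements_path) as f:
        for line in f:
            # Strip version pins like "torch==2.1" or "openai>=1.0".
            name = line.split("==")[0].split(">=")[0].strip().lower()
            if name in ML_PACKAGES:
                found.add(name)
    return found

# Example: find_ml_dependencies("requirements.txt") might return {"torch", "openai"}.
```

Real Software Composition Analysis goes much further – transitive dependencies, model files, license and vulnerability data – but the inventory question is the same: where, when, and how is AI/ML in use.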

AI In Retail. Commentary by Piyush Patel, Chief Ecosystem Officer of Algolia

“The role of AI in retail and ecommerce continues to grow at a rapid pace. In fact, a recent report finds 40% of B2C retailers are increasing their AI search investments to improve the retail journey and set themselves apart from the competition. From internal efficiency to better experiences for customers, these investments will be well received by consumers. An Algolia consumer survey indicates that 59% of U.S. adults believe the wider adoption of AI by retailers will bolster shopping experiences. However, AI skeptics remain a challenge. To boost trust in AI-driven shopping tools, retailers must be prepared to educate consumers on AI’s benefits, how training data for AI models is gathered, and what data is tracked and stored for personalization.”

The AI Revolution: Rehab Therapy Can Expect Reinforcement, Not Replacement. Commentary by Brij Bhuptani, Co-founder and Chief Executive Officer, SPRY Therapeutics, Inc.

“Clinical healthcare professionals are more insulated from the risks of replacement by AI than other professions. Specialties like rehab therapy are even less prone to displacement caused by technology. Yet fears persist that ‘the robots are coming for our jobs’ and that human workers will become obsolete.

As a technologist intimately familiar with the transformation currently taking place in healthcare operations, I can confidently say: AI isn’t here to replace therapists but to augment them.

A therapist’s job requires them to function at an advanced level across many human skills that machines won’t replicate anytime soon. Intuition and experience play a key role, and that isn’t going to change. The integration of AI into clinical practice also will lead to new specializations, as the need grows for staff focused on AI-enhanced diagnoses and data-driven medicine. Rehab therapists also will support patients as they navigate a range of new AI-assisted treatment options.

While AI can’t replace rehab therapists, it can help them to do their work more efficiently and to provide better care. From time-intensive front-desk tasks like insurance authorization, to clinical charting, to compliance-driven services like billing, AI will make all of these processes more efficient, accurate and secure. Along the way, it will allow rehab therapists to improve patient outcomes, as they are free to invest their time in getting to the bottom of complex, nuanced patient issues, while spending less time on busywork.

As with past Industrial Revolutions (the first in mechanization, the second in production, the third in automation), the Fourth Industrial Revolution — the AI Revolution — will be equally disruptive. Already we see the signs. But ultimately, it will bring about net gains, not only in the size of the workforce but also in the quality of care and outcomes it will help clinical professionals to achieve.”

How easy should it be to overrule or reverse AI-driven processes? Commentary by Dr. Hugh Cassidy, Chief Data Scientist and Head of Artificial Intelligence at LeanTaaS

“Humans can offer critical thinking and contextual understanding that AI may lack, especially in nuanced and complex situations. In critical applications, human oversight should be significant, with AI outputs treated as initial drafts or recommendations subject to human review and override. The mechanism for overruling AI-driven processes should be straightforward, efficient, and trackable. It should be designed to allow human intervention with minimal friction, enabling quick decision-making when necessary. User interfaces should be intuitive, providing clear options for human operators to override AI decisions. Additionally, AI systems should be equipped with robust logging and auditing mechanisms to document when and why overrides occur, facilitating continuous improvement.”
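A minimal sketch of such an override-with-audit flow, assuming hypothetical names and fields (this is not LeanTaaS’s implementation):

```python
import logging
from datetime import datetime, timezone
from typing import Optional

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("ai_override_audit")

def finalize_decision(ai_recommendation: str, human_decision: Optional[str],
                      reviewer: str, reason: str = "") -> str:
    """Treat the AI output as a draft; log who overrode it, when, and why."""
    if human_decision is not None and human_decision != ai_recommendation:
        audit.info("override at %s by %s: %r -> %r (reason: %s)",
                   datetime.now(timezone.utc).isoformat(), reviewer,
                   ai_recommendation, human_decision, reason)
        return human_decision
    return ai_recommendation  # no intervention: the AI draft stands

# Example: a reviewer overrides the AI recommendation, leaving an audit trail.
final = finalize_decision("approve", "deny", reviewer="j.smith",
                          reason="patient context the model lacked")
```

The logged record is what enables the ‘continuous improvement’ loop Cassidy mentions: overrides can be mined later to find where the model systematically falls short.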

Maintaining human oversight of AI output or decisions. Commentary by Sean McCrohan, Vice President of Technology at CallRail

“Setting aside a few areas where specialized AI has delivered truly superhuman performance (protein folding and material science, for instance), current-generation generative AI performs a lot like an 11th grade Honors English student. It does an excellent job of analyzing text, it makes capable inferences based on general knowledge, it provides plausibly presented answers even when wrong, and it rarely considers the implications of its answer beyond the immediate context. This is both amazing with regard to the pace of development of the technology, and concerning in cases where people assume it will be infallible. AI is not infallible. It is fast, scalable, and reliable enough to be worth the effort of using, but none of these qualities guarantees it will provide the answer you want every time – especially as it expands into areas where judgment is increasingly subjective or qualitative.

It’s a mistake to consider the need to review AI decisions as a new problem; we have built processes to allow for the review of human decisions for hundreds of years. AI is not yet categorically different, and its decisions should be reviewed or face approval hurdles appropriate to the risk faced if an error is made. Routine tasks should face routine scrutiny; decisions with extraordinary risk require extraordinary review. AI will reach a point in many domains where even review from an experienced human is more likely to add errors than discover them, but it’s not there yet. Before that point, we will pass through a period in which review is necessary, but an increasing percentage of review can be delegated to a second tier of AI tooling. The ability to recognize a risky decision may continue to outpace the ability to make a safe one, leaving a role for AI in flagging decisions (by AI or by humans) for higher-level review.

It is critical to understand the strengths and weaknesses of a particular AI tool, to evaluate its performance against real-world data and your specific needs, and to spot-check that performance in operation on an ongoing basis…just as it would be for a human performing those tasks. And just as with a human employee, the fact that AI is not 100% reliable or consistent is not a barrier to it being very useful, so long as processes are designed to accommodate that reality.”
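One hypothetical way to encode the risk-proportional review McCrohan describes – the thresholds here are illustrative assumptions, not a recommendation:

```python
def review_tier(risk_score: float) -> str:
    """Route a decision to scrutiny proportional to its estimated risk."""
    if risk_score < 0.2:
        return "auto-accept"            # routine tasks get routine scrutiny
    if risk_score < 0.7:
        return "second-tier AI review"  # delegated machine review, spot-checked
    return "human expert review"        # extraordinary risk, extraordinary review
```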

Generative AI capabilities to consider when choosing the right data analytics platform. Commentary by Roy Sgan-Cohen, General Manager of AI, Platforms and Data at Amdocs

“Technical leaders should prioritize data platforms that offer multi-cloud and multi-LLM strategies with support for various Generative AI frameworks. Cost-effectiveness, seamless integration with data sources and consumers, low latency, and robust privacy and security features, including encryption and role-based access control (RBAC), are also essential considerations. Additionally, assessing compatibility with different types of data sources, along with the platform’s approach to semantics, routing, and support for agentic and flow-based use cases, will be crucial in making informed decisions.”
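As a concrete (and purely hypothetical) reading of the ‘routing’ criterion, a platform might pick a model per request against cost and latency budgets; the model names and figures below are placeholders:

```python
MODELS = {
    "cheap_fast":   {"cost_per_1k_tokens": 0.001, "latency_ms": 200},
    "high_quality": {"cost_per_1k_tokens": 0.030, "latency_ms": 900},
}

def route(prompt: str, max_latency_ms: int) -> str:
    """Prefer the cheapest model that meets the caller's latency budget."""
    eligible = [(name, m) for name, m in MODELS.items()
                if m["latency_ms"] <= max_latency_ms]
    if not eligible:
        raise RuntimeError("no model meets the latency budget")
    return min(eligible, key=lambda nm: nm[1]["cost_per_1k_tokens"])[0]

print(route("Summarize this contract.", max_latency_ms=500))  # -> "cheap_fast"
```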

Sign up for the free insideAI News newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideAI NewsNOW
