Heard on the Street – 4/13/2023

Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Why is Observability so damn expensive, and what can you do about it? Commentary by Chris Cooney, Dev Evangelist at full-stack observability platform Coralogix 

Observability is expensive because of one word: volume. We process more data now than we ever have. SaaS vendors have scaled organically to meet this challenge, and in doing so, have had to increase their fees. If a customer decides to build their own solution, instead of a high subscription cost, they’ve got an enormous upfront cost in developer time and the infinitely more terrifying cost of missed opportunities. So what can we do about it? There is a solid, three-step process to cost optimization in the cloud. Step 1 is identification: find the data that customers and teams are using and, most importantly, the data they’re not using. This can be done with OpenSearch query logs or Prometheus activity logging, for example. Step 2 is demand-side optimization: block data you don’t need, or transform logs into metrics when all you need is a numerical value. Step 3 is storage optimization: create storage tiers that can be written in parallel, and a routing mechanism that lets you decide which data needs the best, the medium, and the lowest-cost storage.
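For illustration, here is a minimal Python sketch of steps 2 and 3, assuming a generic log pipeline rather than any particular vendor’s product; the log format, tier names, and routing rules are hypothetical.

```python
# Hypothetical sketch of demand-side and storage optimization for a log pipeline.
# The log format, tier names, and routing rules are illustrative assumptions,
# not any specific vendor's API.
from collections import Counter

def logs_to_metric(log_lines):
    """Demand-side optimization: collapse verbose logs into a numeric metric
    when a count is all that is consumed downstream."""
    status_counts = Counter()
    for line in log_lines:
        # Assume a structured log where the HTTP status is the second field.
        status = line.split()[1]
        status_counts[f'http_requests_total{{status="{status}"}}'] += 1
    return status_counts

def route_to_tier(log_record):
    """Storage optimization: route each record to the cheapest tier that
    still satisfies how the data is actually queried."""
    if log_record.get("level") == "ERROR":
        return "hot"      # frequently queried, fastest (most expensive) storage
    if log_record.get("compliance"):
        return "warm"     # occasionally queried, mid-cost storage
    return "archive"      # rarely queried, lowest-cost object storage

if __name__ == "__main__":
    print(logs_to_metric(["GET 200 /health", "GET 500 /checkout", "GET 200 /home"]))
    print(route_to_tier({"level": "ERROR", "msg": "checkout failed"}))
```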

When it comes to data, enterprises can’t wait for human error. Commentary by Andrew “Spike” Hepburn, Field CTO at Moogsoft

Device use and data sprawl have exploded over the past several years thanks to the pandemic’s escalation of digital transformation. We’re approaching the oft-discussed era of IoT 2.0, during which organizations and people will be more connected than ever. However, the unfortunate side effect is that enterprises and their data will also be more vulnerable than ever thanks to fragmented data streams, increased endpoints and more overall users. And organizations are certainly not immune to these developments. At the enterprise level, modern data architecture is so complex and fragile that a seemingly harmless change can result in catastrophic system failure — including devastating consequences like e-commerce checkout downtime. In this environment, enterprises that rely exclusively on humans to monitor and analyze their data are in for an abrupt and rude wake-up call. IT leaders must consider alternative methods to monitor complex infrastructures, such as AI-based event analysis platforms and leading monitoring tools powered by machine learning.

Kubernetes and SQL Server AG: A Match Made in Container Heaven (But Beware of Configuration Chaos!). Commentary by Don Boxley, CEO and Co-Founder, DH2i

Kubernetes provides a powerful platform for deploying and managing SQL Server AG clusters with increased availability, scalability, and flexibility. By leveraging the containerization and orchestration capabilities of Kubernetes, IT professionals can deploy and manage SQL Server AG clusters more efficiently, with automatic failover, load balancing, and self-healing features built into the platform. However, deploying SQL Server AG clusters in Kubernetes requires careful planning and configuration to ensure optimal performance, security, and compliance. IT professionals must take into account the unique challenges of deploying SQL Server in a containerized environment, such as managing persistent storage, ensuring network connectivity, and configuring security and authentication. Today, what is required is an advanced solution that addresses the challenges of deploying SQL Server AG clusters in Kubernetes by offering features such as automated deployment, intelligent workload balancing, and simplified configuration management. With such a solution, IT professionals can leverage the benefits of containerization and orchestration while also ensuring the performance, security, and compliance of their SQL Server environments.
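As a hedged sketch of the persistent-storage piece only, the snippet below uses the official kubernetes Python client; the image tag, secret name, namespace, replica count, and storage size are assumptions, and the Availability Group configuration itself (endpoints, replicas, listeners) is deliberately left out.

```python
# Sketch: a StatefulSet with one persistent volume claim per replica for SQL
# Server on Kubernetes. Namespace, secret, image tag, and sizes are illustrative
# assumptions; AG setup itself is not shown.
from kubernetes import client, config

def build_mssql_statefulset() -> client.V1StatefulSet:
    container = client.V1Container(
        name="mssql",
        image="mcr.microsoft.com/mssql/server:2019-latest",  # assumed image
        ports=[client.V1ContainerPort(container_port=1433)],
        env=[
            client.V1EnvVar(name="ACCEPT_EULA", value="Y"),
            client.V1EnvVar(
                name="MSSQL_SA_PASSWORD",
                value_from=client.V1EnvVarSource(
                    secret_key_ref=client.V1SecretKeySelector(
                        name="mssql-secret", key="sa-password")),  # assumed secret
            ),
        ],
        volume_mounts=[client.V1VolumeMount(name="mssql-data",
                                            mount_path="/var/opt/mssql")],
    )
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="mssql-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
        ),
    )
    return client.V1StatefulSet(
        metadata=client.V1ObjectMeta(name="mssql-ag"),
        spec=client.V1StatefulSetSpec(
            service_name="mssql",
            replicas=3,  # one pod per AG replica
            selector=client.V1LabelSelector(match_labels={"app": "mssql"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "mssql"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
            volume_claim_templates=[pvc],
        ),
    )

if __name__ == "__main__":
    config.load_kube_config()  # assumes a reachable cluster and kubeconfig
    client.AppsV1Api().create_namespaced_stateful_set(
        namespace="sql", body=build_mssql_statefulset())
```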

ChatGPT’s API. Commentary by Emmanuel Methivier, Business Program Director, Member Of Global Digital Catalysts at Axway

I believe that the recent announcements about the evolution of OpenAI’s API provide a perfect illustration of the API lifecycle. Indeed, it is the demand of API consumers that drives its evolution. API providers evolve their digital products through ongoing interaction with their consumers (developers), much like yogurt manufacturers evolve the taste and packaging of their product by conducting taste tests with their customers. An API, transformed into a digital product, must evolve. The challenge of this evolution is the survival of the service. That’s why it’s necessary to reuse all marketing strategies and apply them to digital products. This is the whole point of the marketplace: an ongoing interaction with consumers. Developers wanted OpenAI’s API to evolve: not only to offer more specific fields of application, but also to allow the submission of datasets to train the engine on a more restricted (and thus more precise) field of study, and to integrate voice recognition (Whisper). OpenAI has listened and is proposing these new methods in its new version. One of the most groundbreaking features of OpenAI’s new API is its ability to generate human-like language through natural language processing (NLP). This is a major step forward in the field of artificial intelligence and has the potential to transform how we interact with machines. The API uses a new language model called GPT-3 (Generative Pre-trained Transformer 3) that has been trained on an enormous corpus of text data, allowing it to generate coherent and sophisticated responses to text prompts. Another significant feature is the API’s ability to perform a wide range of language-related tasks, such as summarization, translation, and question-answering. These capabilities are made possible by the advanced machine-learning algorithms used by the API. Additionally, the API is designed to be highly customizable and can be tailored to specific use cases. This means that developers can create applications that leverage the API’s advanced capabilities in a variety of contexts, from chatbots to content creation tools. Overall, the combination of GPT-3’s advanced language capabilities, the API’s flexibility and customizability, and its broad range of language-related functionalities makes it a truly revolutionary tool in the field of artificial intelligence. This ability for continuous improvement is a guarantee of success in this rapidly evolving field.
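As a brief sketch of how these capabilities look in practice, the snippet below uses the openai Python library as it stood around the time of this commentary (the 0.x interface); the API key, model name, prompt, and file path are placeholders.

```python
# Sketch: calling the ChatGPT and Whisper endpoints with the openai 0.x Python
# library. Model names, prompts, and file paths are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Summarization / question-answering via the chat completions endpoint.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You summarize text in two sentences."},
        {"role": "user", "content": "Summarize: APIs evolve as products..."},
    ],
)
print(chat.choices[0].message.content)

# Speech-to-text via the Whisper endpoint.
with open("meeting.mp3", "rb") as audio_file:  # placeholder file
    transcript = openai.Audio.transcribe("whisper-1", audio_file)
print(transcript.text)
```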

What ChatGPT and AI Mean for Real Estate. Commentary by Laura O’Connor, President and COO of JPAR

I’m always interested in what’s new and how we can use it to our advantage, most recently what the explosion of chatbots and AI technology means for the real estate industry. Change is happening rapidly and, as a leader, it is important to stay agile and adaptive, quickly identifying and integrating emerging trends and developments. In a way that translates to many industries, we strive to build a highly skilled team, ready to create the best customer experience for buyers and sellers. Not change for the sake of change, but for the sake of improvement. ChatGPT has shown that chatbots and other AI technologies are here to stay, and my persistent curiosity led me to quickly embrace its potential to help not just with real estate but, as a working head of household, to help sustain a healthy work-life balance. After testing it out myself, I learned three things about these new AI tools. First, I saw the potential, in real estate and in other industries, for AI technology to save time, freeing up more time to better serve clients, produce a better product, and enjoy quality time with family. Second, it is important to train workers on proper use. We are doing the heavy lifting for our agents, researching how to best use AI technology. If properly used, natural language AI technology like ChatGPT can help real estate agents prepare listing descriptions, client reports, marketing materials, and communications. However, as the rollout of other chatbots has shown, these technologies are not perfect. Using them blindly without crafting appropriate prompts and proofreading outputs can lead to factual and readability errors. AI tools can save drafting and research time but have not proven that they can be used without human supervision. As with all new developments, it is important to quickly implement training on how to best utilize these exciting tools. Third, AI tools should be used to strive for continuous improvement in agent and customer experience. AI tools are not all-knowing, so if responses are not vetted, they can diminish quality and credibility. We have already started training our affiliated owners and agents on how to best use AI tools to maximize their time, emphasizing clear priorities about what the technology can and can’t do. Before integrating it with customer service, we are training our employees to work out any potential pitfalls – real estate, like many other industries, is a relationship business. By understanding the pros and cons and implementing proper training, the benefits of AI tools can grow alongside this exciting technology.
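As one hypothetical illustration of what crafting appropriate prompts and proofreading outputs can look like in practice, here is a small sketch; the fields, wording, and review flow are assumptions, and the model call itself is stubbed out.

```python
# Sketch: crafting a structured prompt for a listing description and forcing a
# human review step before anything is published. The fields and wording are
# hypothetical; the model call is stubbed out.
def build_listing_prompt(beds, baths, sqft, highlights):
    return (
        "Write a 120-word real estate listing description. "
        f"Property: {beds} bed, {baths} bath, {sqft} sq ft. "
        f"Highlights: {', '.join(highlights)}. "
        "Do not invent features that are not listed."
    )

def publish_with_review(draft):
    # Never publish AI output blindly: an agent must proofread and approve.
    print(draft)
    approved = input("Approve this draft? (y/n) ").strip().lower() == "y"
    if not approved:
        raise RuntimeError("Draft rejected; revise the prompt or edit by hand.")
    return draft

prompt = build_listing_prompt(3, 2.5, 1850, ["renovated kitchen", "corner lot"])
# draft = call_your_model(prompt)  # stub: any chat model could fill this in
# publish_with_review(draft)
```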

ChatGPT Poses No Threat To Recruitment. Commentary by Sheilin Herrick, head of Technology Hiring Solutions for SHL

ChatGPT will not put software engineers out of a job, but it may change their role. Coding is just one part of software engineering. Plus, the code ChatGPT produces isn’t flawless. Engineers will still need to understand how code works to know whether ChatGPT is producing high-quality code. ChatGPT can be a helpful tool for software engineers whose jobs have historically included spending hours debugging code. Using a tool like ChatGPT means they’ll be able to spend their time thinking more strategically about how to solve software engineering problems, such as requirements gathering, how UI/UX will work, architecture and which APIs to use. Used correctly, ChatGPT could be a positive for many developers — it may broaden job responsibilities and improve job satisfaction.

Data streaming vs. batch processing – what’s best for a business? Commentary by Michael Drogalis, Principal Technologist, Office of the CTO, Confluent

A recent IDC study found that the amount of data generated globally is expected to double from 2022 to 2026. How can businesses keep up with this volume of data, and more importantly, use it to make critical business decisions? Historically, you might use batch processing to periodically reprocess your entire data set, including any new data that you accumulated since the last run. This approach works and is generally easy to do, but can be slow due to the increased redundancy between runs. A newer approach is to use streaming, which processes data incrementally as soon as it arrives. Whether a company should use batch processing or data streaming primarily depends on two factors. The first is latency. If your business doesn’t need frequent updates (like a payroll system that only needs to run once in a while), batch processing makes a lot of sense. On the other hand, if your business needs are more time sensitive (think shipping information or personalized recommendations), streaming is a better fit. The other factor is whether data is occasionally being used as a resource or needed to drive business outcomes. Batch processing easily fits the role of dashboards that people look at once in a while to make decisions. Streaming, on the other hand, is more suitable for automatically carrying out a customer protection plan in the event of suspected fraud. Depending on the intended use case, both processes can bring tremendous value to a business.
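A toy Python sketch of the difference follows, with plain code standing in for a real batch job and a real stream processor; the order data and running-total metric are made up for illustration.

```python
# Toy contrast between batch reprocessing and incremental stream processing.
# The "orders" data and revenue metric are made up for illustration.

def batch_total(all_orders):
    """Batch: periodically rescan the entire dataset, including rows already
    processed on previous runs (simple, but redundant and therefore slower)."""
    return sum(order["amount"] for order in all_orders)

class StreamingTotal:
    """Streaming: maintain the result incrementally, updating it the moment
    each new event arrives (lower latency, no full rescans)."""
    def __init__(self):
        self.total = 0.0

    def on_event(self, order):
        self.total += order["amount"]
        return self.total

orders = [{"amount": 30.0}, {"amount": 12.5}, {"amount": 7.0}]
print(batch_total(orders))      # 49.5, recomputed from scratch on every run

stream = StreamingTotal()
for o in orders:                # each event updates the running answer
    print(stream.on_event(o))
```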

How ChatGPT fits into enterprise integration. Commentary by Digibee CTO and Co-founder Peter Kreslins, Jr.

ChatGPT has changed the way that every industry does business, mainly because it disrupts how we gather information. The potential applications of ChatGPT are endless; companies are engaging the customer in new ways, teams are enhancing workflows, and developers are using ChatGPT to assist with coding. The integration space is no different. iPaaS (integration platform-as-a-service) technologies are helping organizations integrate and connect data, people and applications across the business. By tapping machine learning and ChatGPT, a company’s systems and applications can be analyzed and optimized almost instantaneously. This not only reduces or eliminates consulting time and expenses, but radically changes the landscape of enterprise integration, because nearly anyone can do it with these tools, saving companies time and money — precious resources in this highly fluctuating and challenging economy. One of the more relevant and tactical use cases for integration is the ability to create, maintain, and consume system design documentation. Developers and system architects can use this conversational artificial intelligence to build system architecture from scratch when developing software. Within integration, this helps teams understand some of the most complex ecosystems without having to document, research, and analyze a multitude of internal (and sometimes external) systems.

Data Strategy Trends in Healthcare. Commentary by Jeff Robbins, Founder and CEO, LiveData

Few verticals experience the dynamic tensions inherent in a high-performance cloud-native data strategy as acutely as the healthcare industry. Enabling decision makers with current and accurate data while navigating the many regulatory, security, and privacy constraints is often still somewhat aspirational, but the trend is underway. For example, let’s look at hospitals. They need to do more with the resources they have. The nursing shortages in the news are more than a bottom-line financial burden — they hit at the heart of delivering needed care. Moving actionable analytics to the cloud gives hospital administrators the platform to improve many factors in a nurse’s workplace, contributing to job satisfaction and, critically, retention. CIOs need to partner with industry vertical experts who can harness broadly accepted horizontal technologies (e.g., Tableau) in a safe way for their particular sector. Failing to prepare puts your organization at risk of being left behind as your peers and competitors enhance their workplaces to keep pace with the heightened expectations of today’s (and tomorrow’s) workforce. While cybersecurity, governance, regulatory compliance, and other facets of avoiding risk make it tempting to go slowly, the rapid changes in our world require purposeful initiatives to make a real impact on what working at your organization means to your employees.

GPT-4 impact on advertising and creativity. Commentary by Manish Sinha, Chief Marketing Officer at STL

GPT-4 changes everything. Put simply, it can bring ideas on cocktail napkins to life! And this provides a snapshot of what AI could soon become for marketers. Its advanced multimodal capabilities mean it can process and analyze data from text, audio, and images simultaneously. It is an even more powerful language model than its predecessors and will elevate existing use cases and create and inspire new ones. Imagine a mix of Stable Diffusion, Dramatron, and MusicLM all merged with the internet’s data and avatars. Well, imagine no more: it’s within touching distance. GPT-4 can even take a question or input from writing on a napkin and output an image and a script. It has already created a website from a paper sketch and recreated a game of ping-pong in less than 60 seconds! The day is not far off when we will see it become fully creative. Should marketers be worried, and will AI ‘creativity’ cut down human talent? Isn’t that question getting tired? I believe AI is already there, and the best creators will utilize AI as a resource to take their creativity and ingenuity to the next level. This is one of the coolest things I have seen and, excuse the cliché, the possibilities are truly endless. Fasten your seat belt: creative marketing is about to change forever!

How APIs will drive democratization of data. Commentary by Jason Hudak, VP of Engineering at Rapid

APIs make it easy to adjust, transform, enrich and consume data. Traditionally, hundreds of highly paid engineers were needed to manage that data, and data scientists were needed to understand the algorithms. In 2023, we are seeing a shift towards API technologies managing data as a way to gain insights and control data-related costs, which means people will no longer need highly developed engineering skills to harness the power of data.
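A small hypothetical sketch of what consuming and enriching data through an API can look like is below; both endpoints are placeholders rather than any real vendor’s API.

```python
# Sketch: consuming and enriching data through a REST API instead of a custom
# data-engineering pipeline. Both endpoints are hypothetical placeholders.
import requests

BASE = "https://api.example.com"  # placeholder API host

def enriched_customers():
    customers = requests.get(f"{BASE}/customers", timeout=10).json()
    for c in customers:
        geo = requests.get(f"{BASE}/geocode", params={"zip": c["zip"]},
                           timeout=10).json()
        # Enrichment: attach region data without touching a warehouse or ETL job.
        c["region"] = geo.get("region", "unknown")
    return customers
```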

Insight on ChatGPT Implementation. Commentary by Hans de Visser, Chief Product Officer at Mendix 

Over the past few years, we’ve seen the maturation of AI technology, and we have adopted AI in two ways in our platform. First, we offer a set of AI bots for AI-assisted development, working almost as a peer for pair programming. Second, with ML Kit we have made the inclusion of cognitive and machine learning services in smart apps a low-code experience. The rise of generative AI technologies such as ChatGPT will fundamentally influence software development. Imagine user stories being turned into working software through a chat-like conversation in which the developer can interactively adjust the application’s logic and look and feel. We’re prototyping such use cases as we speak, but it will take some time before they’re robust enough for production use. In our company we don’t see pushback. Our developers see it as an opportunity rather than a threat. Considering the worldwide developer shortage, these technologies can help alleviate the pressure to meet business demand for software.

The Biggest Challenge for GPT-4 Will Be Data Infrastructure. Commentary by Tony Pialis, cofounder and CEO of Alphawave Semi

We’ve already seen performance, latency and cost challenges in running AI software, with pundits estimating that training GPT-3 alone cost $4 million. While it is crucial to deliver massive computing power to run wildly popular applications like ChatGPT, compute is only one piece of the puzzle. High-speed connectivity solutions are critical to quickly and efficiently move the enormous amount of AI-generated data within and across datacenters. Applications like GPT-4 consume 100x more data, which requires a comprehensive focus and investment in data connectivity infrastructure. As companies including Google, OpenAI, and others double down on generative AI, accelerating data connectivity bandwidth through advances in semiconductor technologies will help to unlock the full potential of generative AI, making it universally accessible to organizations and consumers alike.

The fine line business leaders must walk in the era of generative AI. Commentary by Steve Mills, Managing Director and Partner, Chief AI Ethics Officer, Boston Consulting Group

AI is taking its place as an important tool in companies’ strategic arsenal, but the technology gives the C-suite plenty to worry about. An evolving regulatory system is proposing heavy fines for AI failures, and experimentation with AI often brings unintended consequences for individuals and society. Even if a company is not deploying its own AI systems, buying AI-embedded products and services from a vendor still poses risks. And employees are beginning to interact with generative AI in their daily work, introducing new complexity that exposes the business to even more risk. Leaders need to manage this risk while still enabling and encouraging experimentation with generative AI technologies. Generative AI introduces a new set of risks such as capability overhang, hallucinations, questions over IP ownership, and the potential for disclosure of sensitive information. These risks are not theoretical. Countless examples already exist, showing how they manifest in the real world. This is complicated further by the way in which the technology has democratized AI. Shadow AI – the AI use cases developed by teams outside of normal processes and governance – has already been a challenge for companies. Generative AI is leading to instances of shadow AI to an extent and magnitude we’ve never seen before. Suddenly, AI can originate from anyone, anywhere in the organization. It can appear extremely rapidly, and it is difficult to detect. Leaders must quickly get their arms around these shadow uses to minimize risk. To that end, one of the most important things for CEOs and those in the C-suite to keep in mind as AI – especially generative AI – proliferates across industries and organizations is to stand up Responsible AI (RAI) structures and processes. This must be a priority, and leaders must take a top-down approach to investing in and implementing these practices. Notably, while investment in RAI must be top-down, the democratization enabled by generative AI means that leaders must also be sharply focused on embedding RAI into corporate culture from the bottom up. CEOs must work to ensure that a broad employee awareness campaign follows the implementation of guidelines. It’s incredibly important for those at the top to balance their eagerness to put new generative AI tools to use with the obligation of designing and deploying AI ethically and in line with company values. Only then can we begin to facilitate and encourage more confident experimentation with, and application of, this technology, knowing that the right structures are in place to avoid unwelcome surprises from unanticipated risks.

Address data quality is more important than you might think – impacting your decision making and business operations. Commentary by Berkley Charlton, Chief Product Officer at Smarty

In the current economic climate, there’s a lot of uncertainty. Budgets are tight and people are finding themselves having to do more with less. Because of this, having comprehensive, accurate address data is key to improving operations, decision making, and retaining customers. For instance, if a retail organization finds itself with a lot of late deliveries and missing packages due to poor address data, that will negatively impact CX and reflect poorly on the organization. With good-quality address data, you as a business know that all of the addresses are valid and real, which helps cut down on delivery challenges. Organizations need to focus on optimizing their processes and identifying ways to run a tighter ship, and better address data could be the difference between weathering the storm and sinking. Doing so can ultimately lead to reduced costs, liability, inefficiency, and complaints.
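For illustration, here is a naive Python sketch of address normalization and de-duplication; a production deployment would verify against authoritative address data, and the small abbreviation map here is an illustrative assumption rather than any vendor’s method.

```python
# Sketch: naive address normalization and de-duplication. A real deployment
# would validate against authoritative address data; the abbreviation map is
# a small illustrative assumption.
import re

ABBREV = {"street": "st", "avenue": "ave", "road": "rd", "apartment": "apt"}

def normalize(address):
    tokens = re.sub(r"[.,#]", " ", address.lower()).split()
    return " ".join(ABBREV.get(t, t) for t in tokens)

def dedupe(addresses):
    seen, unique = set(), []
    for a in addresses:
        key = normalize(a)
        if key not in seen:      # duplicates often hide as formatting variants
            seen.add(key)
            unique.append(a)
    return unique

print(dedupe(["123 Main Street, Apt. 4", "123 main st apt 4", "9 Oak Avenue"]))
```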

IBM Acquires Ahana. Commentary by Starburst’s CEO Justin Borgman

IBM is clearly making an investment in the growing data lake analytics market. More companies are moving away from legacy data warehousing models, storing their data in lower-cost object storage and putting flexible analytics solutions on top to analyze the data in and around their lake. Trino (formerly known as PrestoSQL) is the leading data lake analytics engine, and this acquisition is further validation of the more modern data lakehouse approach.
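As a rough illustration of the data lake analytics pattern, here is a sketch using Trino’s Python client; the host, catalog, schema, and table names are placeholders.

```python
# Sketch: querying data sitting in object storage directly with Trino's Python
# client. Host, catalog, schema, and table names are placeholders.
import trino

conn = trino.dbapi.connect(
    host="trino.example.com", port=8080, user="analyst",
    catalog="hive", schema="web",        # e.g. a Hive/Iceberg catalog over S3
)
cur = conn.cursor()
cur.execute("""
    SELECT event_date, count(*) AS events
    FROM clickstream
    WHERE event_date >= DATE '2023-04-01'
    GROUP BY event_date
    ORDER BY event_date
""")
for row in cur.fetchall():
    print(row)
```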
