Heard on the Street – 12/28/2023

Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.

Response to OpenAI lawsuit. Commentary by Dr Yair Adato, CEO of BRIA

“As another household name sues OpenAI for copyright infringement, it is a disappointing setback in what has otherwise been a cautious but steady move towards progress. Companies that deploy OpenAI’s technology will be watching this case unfold and wondering if they might encounter the same legal pushback. To reassure enterprises and the public, we as an industry need to double down on our efforts to safeguard the rights of content creators. First and foremost, this means ensuring the technology that underpins generative AI is made open and available for public consumption; this level of transparency will reduce the possibility of misappropriating content.

Recent developments, including OpenAI’s partnership with Axel Springer, suggest a positive direction of travel with regard to intellectual property rights in AI. To continue down this road, and to avoid improper repurposing of content, a deluge of litigation, and the erosion of public trust in AI, we must set clear parameters for how content can be collated, and standards against which we can all hold ourselves. Whether we are developers, practitioners or businesses working in AI, we are all equally accountable for ensuring a responsible AI future.”

Applying a Customer-First Approach to Effective AI Product Strategy. Commentary by Andy Boyd, Chief Product Officer at Appfire

“Artificial Intelligence (AI) is arguably the most disruptive technology of the current decade, and in real time we’ve witnessed companies across industries leverage it to improve customer experience and optimize the way teams work. In today’s digital age, where companies have access to an abundance of data about customer habits, it’s up to product teams to stitch this useful data together and distill the signal from the noise. By first gaining a deep understanding of customer challenges, product leaders can then provide products that use AI in a meaningful way – products that quickly and accurately solve customer problems in new and innovative ways.

Chatbots are one example of how companies are using AI to both identify and assist with customers’ needs. They are an efficient and accessible solution for serving customers at any time of day, and they’re one of the most effective AI use cases for supporting a positive customer experience. In the same vein, in instances where customers are connected to contact centers, organizations have begun leveraging AI to mine data so that information is readily available to agents, allowing them to resolve customer issues more swiftly. Chatbots and data mining, however, are just the tip of the iceberg for AI.”

Risks that come with data hoarding and ways organizations can prevent it. Commentary by Terry Ray, SVP Data Security GTM at Imperva

“Two-thirds of the average organization’s current data is either no longer useful or never was useful in the first place. This ‘data hoarding’ opens organizations up to a multitude of risks. When collecting data, we’re often told that quantity equates to value and that all numbers must be considered in order to make informed decisions. However, over-collection, storing old data, and gathering irrelevant data actually create problems rather than solve them. Massive stores of unused and unmonitored data are of huge potential value to attackers, potentially leading to identity or IP theft and to business reconnaissance that finds a route to their prime targets.

There are a few ways to help prevent organizations from developing hoarding habits:

  • Have a clear data strategy: A robust data strategy should answer the crucial questions about data – Do we need it? How will it be used? How long do we need to keep it? Where does it need to be kept? And when and how will we dispose of it?
  • Classification and discovery are key if you’re not going to secure everything: Regardless of data’s value over its lifetime, organizations tend to take either a broad-stroke approach, applying security controls equally across all data, or a selective approach, applying controls only to regulated or critical data. The right approach is up to each organization and its priorities, but those opting for selective controls require thorough data discovery and data-type classification in order to accurately target where to apply security technology investments.
  • Take the easy wins: Organizations should take every opportunity to reduce the size of their hoard – for instance, making sure that waste basket folders that could contain years’ worth of emails or other files are permanently deleted.”
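The disposal questions in the data strategy above can be made mechanical. Here is a minimal sketch, not Imperva’s implementation: the data classes, retention periods, and record fields are all hypothetical, purely to illustrate flagging records that have outlived their retention period.

```python
from datetime import date

# Hypothetical retention rules: days to keep each data class.
RETENTION_DAYS = {"logs": 90, "emails": 365, "financial": 2555}

def due_for_disposal(records, today):
    """Return ids of records older than their class's retention period."""
    expired = []
    for rec in records:
        keep_days = RETENTION_DAYS.get(rec["class"])
        if keep_days is not None and (today - rec["created"]).days > keep_days:
            expired.append(rec["id"])
    return expired

records = [
    {"id": "r1", "class": "logs", "created": date(2023, 1, 1)},    # ~1 year old
    {"id": "r2", "class": "emails", "created": date(2023, 11, 1)}, # recent
]
print(due_for_disposal(records, date(2023, 12, 28)))  # only the stale logs
```

A scheduled job running a rule like this against real metadata is one way to keep the “hoard” from growing unchecked.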

AI’s impact on the enterprise software business. Commentary by Alex Rumble, SVP of Product Marketing at IFS

“Generative AI has the potential to completely remake how enterprise software companies operate, with both internal and external impacts. Assessing where to focus and aligning upskilling to the pace of adoption is fundamental to reaping any benefits that go beyond isolated pilots to make a positive organizational-wide impact.

Undoubtedly, AI will accelerate the time to market of any capability, and research already suggests that generative AI can help software engineers develop code up to 45 percent faster and generate code documentation up to 50 percent faster. From a productivity standpoint, this is clearly a huge step forward, providing the agility needed to respond faster to customer needs created by market and macroeconomic dynamics.

But for the enterprise software industry, deploying AI alone is not enough – it has to go hand in hand with end-to-end organizational thinking and readiness assessments, as well as clearly defined financial returns. AI will enable incredible strides in the development of enterprise software; success, however, depends on customers’ ability to adopt, absorb and consume it… and that may open the door to new services needed to ensure value realization.”

How AI is shifting the standard for DevOps processes. Commentary by Puneet Kohli, President, Application Modernization of Rocket Software

“A recent industry survey found that 58% of IT leaders consider DevOps a priority focus area. DevOps tools eliminate the silos between the development and operations teams, empowering IT teams to evolve and improve products at a faster pace than traditional software development allows. DevOps helps with automating workflows, reducing errors, increasing productivity, and fulfilling compliance requirements – all while meeting the needs of an organization’s specific environment.

But how can companies get the most out of DevOps? By leveraging the technology on everyone’s mind these days: artificial intelligence. By incorporating AI tools into DevOps workflows, companies can maximize these benefits and reduce the time needed for certain tasks, freeing IT professionals to focus their time and attention on more creative and strategic responsibilities. The most talked-about form of AI is generative AI, which applies algorithms to analyze patterns in datasets and then mimics their style or structure to generate new content. What’s more beneficial to IT professionals at this moment, though, is predictive AI, which uses algorithms trained on historical data patterns and existing information to forecast outcomes – predicting customer preferences and market trends, and providing valuable insights for decision-making.

Predictive AI can analyze and predict future infrastructure needs, helping teams scale up faster and allocate resources appropriately. Using historical data and leveraging AI technology, companies can make predictions about future outcomes, find patterns and identify risks. This accelerates the application development through the DevOps pipeline, ultimately reducing the time needed for delivery. 
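The capacity-planning idea above can be illustrated with a deliberately simple model. This sketch fits an ordinary least-squares trend line to historical usage and extrapolates one period ahead; the metric and values are hypothetical, and a real predictive-AI system would use far richer models.

```python
def forecast_next(history):
    """Fit a least-squares line to past usage and project one step ahead."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    # Slope and intercept of the ordinary least-squares fit.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * n  # prediction for the next time step

# Hypothetical weekly peak CPU utilization (%) for one service.
usage = [40, 45, 50, 55, 60]
print(forecast_next(usage))  # a linear series extrapolates to 65.0
```

A team could compare such a projection against provisioned capacity to decide when to scale up ahead of demand.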

Another way to leverage AI in DevOps is automated testing. Testing and process discovery are evolving into more than standard steps in the DevOps process. Because test cases can be built easily with intelligent testing and shared early in the CI/CD process, automated testing can be used to catch superficial bugs. This frees up testers to focus on higher-level quality issues, spend more time deciding what needs to be tested, and build more comprehensive test cases. With automation, the options for where, when, and what DevOps teams can test are expanded, and tests can be run during development, not just after.
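Table-driven test cases are one reason automated tests are easy to generate and share early in a pipeline. A minimal sketch follows; the function under test and the case table are hypothetical, and in practice a tool might append new rows from observed production inputs.

```python
# Hypothetical function under test.
def normalize_username(raw: str) -> str:
    return raw.strip().lower()

# Table-driven cases: rows like these are cheap to generate and to
# share early in the CI/CD process.
CASES = [
    ("  Alice ", "alice"),
    ("BOB", "bob"),
    ("carol", "carol"),
]

def run_cases(cases):
    """Run every case; return the list of failures (empty means pass)."""
    failures = []
    for raw, expected in cases:
        got = normalize_username(raw)
        if got != expected:
            failures.append((raw, expected, got))
    return failures

print("failures:", run_cases(CASES))
```

Because the runner is data-driven, the same harness can execute during development and again in CI, which is exactly the “test earlier, test more” shift described above.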

The future of DevOps is a combination of new and established technologies to accelerate innovation. When leveraged the right way, AI can be a game-changing asset for companies who are prioritizing modernization and efficiency.”

Faster insights, agility, and improved AI matter now more than ever. Commentary by Ram Chakravarti, CTO, BMC Software 

“For executives out there wondering how to move forward with AI, three main points can help guide the process: First, AI and big data are inextricably linked. Second, it’s still early days for large-language models (LLMs) and they are best deployed in domain-specific models. Third, the absolute key to extracting value from AI is the ability to operationalize it. 

AI and data are in a cosmic dance. Companies have so much data, they don’t know what to do with it. For AI to have value, it needs to be trained on high quality data sets. For many use cases, the quality of the data sets matters just as much as the volume of data. At the same time, data volumes are so large that organizations can’t unravel meaning from their data without AI. AI and data are intricately intertwined, one unlocking the value of the other.  

LLMs are still in a nascent stage. Most organizations should look to apply AI to specific use cases, using domain-specific models that can provide immediate value. Further, they should team up with strategic partners – software vendors and systems integrators – from concept through implementation, and value realization. Finally, they should ensure that the solution addresses all the elements of risk – security, accuracy, quality, privacy, biases and ethics – so that it is viable for operationalization. Company-wide transformation isn’t going to happen overnight. But organizations can look for well-defined projects that can generate success and provide their teams the experience they need for future iterations.  

Operationalizing Innovation. So what does a successful marriage of data and domain-specific AI look like in action? Let’s consider IT operations and service management to illustrate the concept. On the IT operations side, organizations have a large volume of data – metrics, logs, events, traces, network, storage, application performance data and cloud monitoring data – extracted from a variety of environments. This data can be linked to the service context of the business – tickets, downtime, and maintenance requests. This marriage of service and operations data has become known as ServiceOps, and it’s commonly used to drive collaboration across the organization, automate routine tasks and gain advance warning of disruptions. By training and fine-tuning an LLM on ServiceOps domain-specific data, organizations can identify patterns and generate previously unobtainable information such as resolution insights, business risk prediction and so much more.”
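The ServiceOps linkage described above is, at its core, a join between operations data and service context. Here is a minimal sketch of that join; the field names, anomaly threshold, and records are all hypothetical, standing in for real monitoring and ticketing feeds.

```python
# Hypothetical operations events and service tickets.
ops_events = [
    {"service": "payments", "metric": "latency_ms", "value": 950},
    {"service": "search",   "metric": "latency_ms", "value": 120},
]
tickets = [
    {"id": "T-101", "service": "payments", "summary": "Checkout slow"},
]

def link_events_to_tickets(events, tickets, threshold=500):
    """Pair anomalous events with open tickets for the same service."""
    by_service = {}
    for t in tickets:
        by_service.setdefault(t["service"], []).append(t["id"])
    return [
        {"service": e["service"], "value": e["value"],
         "tickets": by_service.get(e["service"], [])}
        for e in events
        if e["value"] > threshold  # crude anomaly rule, for the sketch only
    ]

print(link_events_to_tickets(ops_events, tickets))
```

Linked records like these are the kind of domain-specific corpus on which an LLM could then be fine-tuned to surface resolution insights.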

AI is Revolutionizing Performance Testing. Commentary by Stephen Feloney, VP of product, continuous testing at Perforce

“Since the rise of ChatGPT last November, it has dominated nearly every conversation in the tech space. Artificial intelligence (AI) is expected to transform every aspect of software development, and performance testing is no exception. If developers and engineers don’t embrace AI, they risk falling behind their competitors. Gone are the days of relying on data scientists to build and deploy AI models – today’s AI tools have made the technology widely accessible, allowing teams to implement AI products and services swiftly.

There are many benefits to utilizing AI in performance testing. It can improve quality, ensuring the success of the app’s production; it can increase efficiency, decreasing testing time and eliminating manual errors; it can democratize testing, allowing users with varying skill sets to run performance tests; and it can provide confidence, supporting all developers with the tools they need to improve their testing abilities. 

Although it can be daunting to leverage AI when there still seems to be uncertainty around its use in testing, there are ways to help apprehensive testing teams get started on their AI deployment journey. The first step is to provide education around the different AI tools available and what they offer. Next, think about how AI will benefit the team and plan its implementation strategically based on what’s uncovered. One simple way to get started with AI is to incorporate automated tools into the testing cycle, which can free up testers’ time for more complex cases. With proper education and strategic insight, there is no reason for teams to shy away from AI, especially as adoption grows across industries.”

Logistics optimization could increase market concentration if small businesses don’t act now. Commentary by Asparuh Koev, Co-Founder and CEO of Transmetrics

“Logistics network optimization powered by the latest technology can give large businesses with access to data and resources a significant advantage over smaller firms. Within the warehouse, robots and RFID technology aid in inventory management, while outside, telematics and IoT in trucks and containers enable real-time shipment tracking and monitoring of road conditions and equipment. These devices help provide accurate data capture to fuel advanced analytics software and enforce data-driven decisions.

Empowered with advanced data analytics, industry giants are forming strategic alliances with little financial risk. Kuehne+Nagel and SATS, for example, work together to optimize ground handling, improve shipment visibility, and boost cargo processing speeds, but such large deals also stir apprehension about the implications for smaller players in the market.

However, it is crucial to emphasize that logistics network optimization isn’t an exclusive privilege reserved solely for big players. In this era of digital transformation, small and medium-sized enterprises (SMEs) have a golden opportunity to level the playing field by proactively investing in the available data infrastructure and resources.

By embracing innovative data analytics technologies such as digital twins-based scenario planning, SMEs can simulate the outcomes of strategic decisions like asset repositioning or hub addition and removal. Meanwhile, with AI-powered demand forecasting, they can optimize short-term processes, including inventory management.
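The demand-forecasting idea is approachable even at SME scale. As a deliberately simple stand-in for the AI-powered forecasting described above, this sketch forecasts next month’s demand for one SKU as a moving average of recent months; the demand figures are hypothetical.

```python
def moving_average_forecast(demand, window=3):
    """Forecast the next period as the mean of the last `window` values.
    A simple baseline, not a production demand model."""
    recent = demand[-window:]
    return sum(recent) / len(recent)

# Hypothetical monthly unit demand for one SKU.
monthly_demand = [120, 130, 125, 140, 150]
print(moving_average_forecast(monthly_demand))  # mean of 125, 140, 150
```

Even a baseline like this gives an inventory planner a number to order against; more sophisticated models earn their keep by beating it.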

Whether it’s strategic partnerships, investment in automation, or relocating hubs, data analytics is enabling SMEs to fine-tune their logistics processes, unlocking cost savings and boosting efficiency. These enhancements give them a palpable edge, allowing them to seize a more dominant market share.

The key lies in recognizing that data-driven decision-making is no longer a luxury but a strategic necessity, irrespective of a company’s size. Logistics teams can use historical data to identify future trends and real-time data to analyze the current landscape and adapt swiftly, if necessary, maximizing each company’s resources and streamlining processes for the greatest efficiency. Then, they can look at extra capacity and areas of growth.

Ultimately, the concerns surrounding market concentration should prompt a collective call to action for all stakeholders within the logistics industry. By democratizing access to innovative tools and fueling them with accurate data, SMEs and industry giants alike can optimize logistics networks, catalyzing healthy competition and sustainable growth across the entire business spectrum.”

Role of AI in Improving the Digital Employee Experience. Commentary by Khadim Batti, CEO of Whatfix

“Artificial intelligence serves as a powerful tool to enhance the digital employee experience by supporting digital adoption platforms (DAPs). These platforms assist software users in maximizing productivity within their business applications.

AI can swiftly analyze vast amounts of data, leading to increased revenue, reduced costs, and mitigated risks. It is now employed to create a more personalized experience, making end users feel as if their applications are tailored to their specific workplace needs. AI helps users prioritize tasks and significantly boosts the efficiency of business workflows, automating up to 80% of steps in processes like bank reconciliation. Users don’t need to understand the intricacies of how AI accelerates these processes; they simply need to know what’s required for their jobs. For instance, AI can manage administrative tasks in a complex sales process or assist marketing teams with intuitive dashboards leading to solutions.
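To make the bank-reconciliation example concrete, here is a minimal sketch of the matching step that automation takes over. It is illustrative only, not Whatfix’s implementation: it matches on exact (date, amount) pairs, whereas real reconciliation must also handle duplicates, fees, and fuzzy matches.

```python
def reconcile(bank_rows, ledger_rows):
    """Match bank transactions to ledger entries by (date, amount).
    Returns (matched pairs, unmatched bank ids, unmatched ledger ids)."""
    unmatched_ledger = list(ledger_rows)
    matched, unmatched_bank = [], []
    for row in bank_rows:
        for entry in unmatched_ledger:
            if row["date"] == entry["date"] and row["amount"] == entry["amount"]:
                matched.append((row["id"], entry["id"]))
                unmatched_ledger.remove(entry)  # each entry matches at most once
                break
        else:
            unmatched_bank.append(row["id"])
    return matched, unmatched_bank, [e["id"] for e in unmatched_ledger]

# Hypothetical records.
bank = [
    {"id": "B1", "date": "2023-12-01", "amount": 250.00},
    {"id": "B2", "date": "2023-12-02", "amount": 99.99},
]
ledger = [
    {"id": "L1", "date": "2023-12-01", "amount": 250.00},
]
print(reconcile(bank, ledger))  # B1 pairs with L1; B2 has no ledger match
```

Automating this matching leaves only the genuinely ambiguous leftovers for a human, which is where the large time savings come from.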

Furthermore, a digital adoption platform (DAP) provides a unified interface, integrating AI capabilities across all applications. This approach eliminates the need for users to grapple with steep learning curves associated with different AI apps. The DAP simplifies the complexity, delivering a seamless interface for all applications.

AI enables the DAP to offer more contextual and personalized guidance based on each user’s job role, experience, skill level, and geographic location. For example, a smart AI agent can distinguish between a staff scientist inquiring about chemical properties used in the lab and someone from the procurement department seeking price lists for the same chemical.

Additionally, AI supports DAP analytics, making sense of user behaviors and providing insights in plain conversational language. A generative AI chatbot can seamlessly transition with the user between 10 or more applications throughout the business day. This way, users receive prompts to make better decisions and avoid repetitive mistakes. In essence, AI has emerged as a critical enabler, enhancing business software to be more user-friendly, productive, and efficient.”

User-friendly AI prediction. Commentary by Fabricio Inocencio, Head of Education & AI at Digibee

“In the last 30 years, humans have interacted with software primarily through visual interfaces, and UX design was crucial in creating user-friendly interfaces that ease communication between humans and machines. This kind of interaction has benefited developers as well, who have been able to create new technology through low-code development solutions.

In more recent years, with the development of AI, we’ve seen the emergence of sophisticated conversational UX, through voice-based communications tools like Siri and Alexa or LLMs like ChatGPT, and a new era of how humans interact with machines is already on course.

But when it comes to developing new technologies, AI will continue to help humans while also favoring the machine side of the equation. As user-friendly interfaces and low-code development continue to evolve, we will see the rise of ‘AI-friendly solutions,’ developed collaboratively by humans and AI and optimized to work seamlessly with AI technologies. As a result, we will see more, better, and faster technology available to us; not only will the way humans interact with machines change drastically, but so will the way machines interact with machines, promising a dynamic and interconnected future.”

How OpenAI DevDay news affects enterprises, security, innovation, developers, and more. Commentary by Sebastien Paquet, VP of Machine Learning, Coveo

“OpenAI’s recent bundle of updates is very exciting, especially for developers and people working with LLMs. The biggest and probably most eye-catching is the announcement of GPT-4 Turbo, with its improvements on the original GPT-4 model. From the enterprise adopter’s view, technical improvements like the 2x rate-limit increase and 2x-3x cheaper pricing will encourage adoption, while the JSON mode will make it easier to integrate and interpret the answers in existing systems. The more recent knowledge cutoff of April 2023 is interesting, but is still not enough for enterprises. A scalable and secure retrieval system remains an essential part of an enterprise question-answering system or assistant – enterprise content changes often, and grounding the LLM in the most recent content remains a necessity. OpenAI’s new GPT Builder (with a monetizable GPT Store) and Assistants API are also very alluring to developers, since they will make it easier to build advanced GPT-4-powered assistants.
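The practical appeal of a JSON mode is that downstream code can parse a model’s answer directly instead of scraping free text. Here is a minimal sketch of the consuming side: the response string is hard-coded (in practice it would come from a chat-completions call with JSON output requested), and the field names are hypothetical schema choices, not anything OpenAI mandates.

```python
import json

def parse_assistant_answer(raw: str) -> dict:
    """Parse a JSON-formatted model answer and check required fields.
    Field names ('answer', 'sources') are illustrative; use your own schema."""
    data = json.loads(raw)
    for field in ("answer", "sources"):
        if field not in data:
            raise ValueError(f"missing field: {field}")
    return data

# Stand-in for a JSON-mode response from the model.
raw_response = '{"answer": "Reset via Settings > Security.", "sources": ["kb-42"]}'
result = parse_assistant_answer(raw_response)
print(result["answer"])
```

Structured output like this is what lets an assistant’s answers feed directly into existing enterprise systems, rather than requiring brittle text parsing.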

Overall, OpenAI’s latest updates demonstrate an impressive speed of innovation and make it easier and faster to build applications that understand human language and can integrate with existing APIs to take actions. This holds great potential for developers and regular users, who will now have greater access to AI technology. It also raises concerns about the impact on other AI startups and enterprises that build applications that are easy to replicate. These organizations will need to build a moat to protect their business by finding differentiators that are not easily replicated.”

Sign up for the free insideAI News newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideAI NewsNOW