Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!
OpenAI facing a class action lawsuit due to web scraping. Commentary by Vered Horesh, Chief Strategic AI Partnerships at Bria
“The advancements in Generative AI present immense opportunities for both businesses and individuals. Yet, the crucial issue of data sourcing for this technology cannot be overlooked. Ignoring the provenance of datasets and disregarding internet scraping practices can lead to legal, privacy and safety drawbacks. In our collective interest, we must advocate for the responsible procurement of data powering Generative AI, without stifling human creativity.
If we diminish incentives for human original expression, what substance will be left to fuel AI? It is imperative to establish legal and business frameworks fostering an environment where human creativity and progress not only coexist but thrive. As demonstrated by recent market trends, viable alternatives exist that respect copyright laws and personal privacy, indicating the potential for a sustainable data economy. Bria AI has been staunchly supporting this ethos since its inception. We are encouraged to see industry giants, including Adobe, joining the movement towards a sustainable AI ecosystem.”
Checking Data Quality Before Diving Headfirst Into AI. Commentary by Vijay Raman, Head of Product & Technology at ibi
“More than ever, organizations need accurate, reliable data. Luckily, the explosion of Large Language Models (LLMs), coupled with rising cloud costs, has brought a renewed focus on data quality. Artificial intelligence (AI) is only as good as the data upon which it’s based, and high-quality data opens up benefits that permeate a company. For example, quality data can act as a kind of firewall that prevents other sectors within an organization from making business decisions based on incorrect information. Quality data can also help to improve customer relationships. With clean, accurate and actionable customer data, business leaders can better curate experiences, allowing organizations to provide frictionless service. As organizations improve their data quality, data compliance should also be a consideration. Data regulations and compliance laws are constantly changing, and sound data governance processes, backed by quality data, can make the process of staying current that much easier.
There are a few steps data specialists and IT professionals can take to enhance their data quality. First, data personnel should prioritize which data is most important for the organization. This includes the data that is necessary for regulatory compliance as well as for key decision making. It’s important to ensure that this data is accurate, consistent, and complete so that business decisions, especially those that directly affect customers, aren’t made in error. Next, data personnel can take time to create data standards that inform how the organization should manage data. This means implementing rules for cleansing, deduplicating, and formatting data as close to collection as possible, which will help to promote effective data use throughout a business. Lastly, IT professionals must create methods for regular data inspection and intervention, since prioritizing data and creating business-wide standards for enhancing data quality is not a one-time process. As businesses lean more heavily on LLMs to make business decisions, it’s important to remember that these models can only be effective with quality, actionable data.”
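The cleansing, deduplicating, and formatting step described above can be sketched in a few lines. This is a minimal illustration, assuming a simple customer-record schema and using email as the dedup key; the field names and rules are hypothetical, not any vendor's actual pipeline.

```python
# Minimal sketch: cleanse, deduplicate, and format records as close
# to collection as possible. Schema and rules are illustrative only.

def clean_record(rec):
    """Normalize a raw record: trim whitespace, lowercase email, title-case name."""
    return {
        "name": rec.get("name", "").strip().title(),
        "email": rec.get("email", "").strip().lower(),
    }

def deduplicate(records):
    """Keep the first occurrence of each email (a simple dedup key)."""
    seen, out = set(), []
    for rec in map(clean_record, records):
        key = rec["email"]
        if key and key not in seen:
            seen.add(key)
            out.append(rec)
    return out

raw = [
    {"name": "  ada lovelace ", "email": "ADA@example.com"},
    {"name": "Ada Lovelace", "email": "ada@example.com "},  # duplicate
    {"name": "grace hopper", "email": "grace@example.com"},
]
print(deduplicate(raw))
```

Running standardization before deduplication matters: the two “Ada” rows only collapse into one because their emails are normalized first.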
What do Congress and the general public need to consider regarding AI standardization? Commentary by Prashant Bhuyan, founder, CEO and chairman of Accrete
“Senate Majority Leader Chuck Schumer recently presented The SAFE Innovation Framework for AI policy and called upon Congress to act quickly, bypassing the slower and more traditional process of committee hearings. I’m glad the urgency and necessity of moving fast is being recognized and that the framework includes subjects of great importance, including security, accountability, and explainability. It’s crucial that Congress also address bias. Generative AI models learn from unlabeled data created by people and other AI. As such, AI has the potential to amplify bias. It’s of critical importance that AI-generated content can be attributed to ground truth so as to reveal the extent of bias through explainability. AI that reasons from ground truth in a transparent way will enable a utopian outcome in terms of giving humans superhuman productivity. AI that opaquely amplifies bias will create reasoning vulnerabilities that will be exploited by bad actors and could potentially destabilize society. Creating standards around bias will go a long way toward mitigating the negative possibilities of AI and creating user trust, which is beneficial for any government agency or commercial enterprise using AI in the long run.”
AI in HealthTech. Commentary by Christopher Day, Rally Visionary and Elevate Ventures CEO
“The widespread adoption of healthtech devices has generated a massive surge in healthcare patient data. These devices, whether prescribed by physicians or embraced by consumers as fitness trackers, produce an immense amount of valuable information that can be used to inform treatment. At this point, Generative AI is a necessity to make sense of the ever-growing volume of health data.
The next phase of healthtech innovations must be able to leverage AI natively, and they will only reach the market through close collaboration with software and materials engineers, hardtech experts and clinicians. This multidisciplinary, cross-sector approach to product development will define whether the next healthtech company makes it to a minimum viable product, let alone wider market adoption.”
Reaction to Salesforce GPT, generative AI disruption. Commentary by Joe Harouni, Connected Commerce Lead, Avionos
“While AI has had a perennial slot on disruptor lists over the past decade, it was not seriously considered by the majority of companies because of well-known barriers to entry: ensuring data quality, the effort and expense of training your own model, and hiring the right talent to deliver results. This made the path to value fraught. Today, AI vanguards and established platforms have lowered those barriers by abstracting the complexity and offering consumption models or purpose-built AI. Business leaders will place small bets on incorporating ancillary ecosystem players (e.g. adding Salesforce Commerce GPT to an existing Salesforce install), but they will do so in a way that builds and showcases the business case. This paves the way for accelerated and expanded use of third-party models or the decision to bring more capabilities in-house.”
Optimizing product content for ChatGPT. Commentary by Randy Mercer, CPO, 1WorldSync
“Search engines with ChatGPT extensions use countless online sources to generate answers about consumer products, including consumer and expert reviews and niche subject matter sites. To take advantage of large language model (LLM) integrations, brands must think beyond the product detail page (PDP) and ensure they have balanced content marketing and consumer outreach strategies for their products. In other words, if you want your brand to show up in results, focus on public relations. Make sure reputable influencers in your space are writing about your products and the unique qualities that elevate them above your competition.”
How AI and ML-driven Predictive Intelligence Can Optimize Tech Sales Teams. Commentary by Dana Therrien, Vice President, Revenue Operations and Sales Performance, Anaplan
“The current macro-environment has challenged sales teams across the tech sector to do more with less, identify new revenue streams, and create more agile go-to-market strategies. Revenue leaders need to be able to pivot quickly and adjust territories, forecasts, and quotas to meet the market and keep high-performing sellers engaged.
In this market, advanced technologies like AI, ML, and predictive analytics can make a big impact on tech sales teams. Revenue leaders can augment existing sales data with predictive attributes and third-party data, then use that data to score and segment accounts based on their profile fit and buyer intent. Based on these predictive insights, sales leaders can create more realistic territory and quota models that keep their pipelines healthy and sellers motivated, and decision-makers can identify ways to further optimize their sales resources to maximize effectiveness. The result? Stronger, more targeted deal cycles, increased contract value, and more active and engaged sales teams.”
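The scoring-and-segmentation step above can be sketched simply. This is a hedged illustration, not Anaplan's actual model: the weights, score fields, tier thresholds, and account names are all hypothetical assumptions.

```python
# Sketch: blend profile fit and buyer intent into one score, then
# segment accounts into tiers. Weights and thresholds are illustrative.

def score(account, w_fit=0.6, w_intent=0.4):
    """Weighted blend of profile fit and buyer intent (both on a 0-100 scale)."""
    return w_fit * account["fit"] + w_intent * account["intent"]

def segment(accounts):
    """Assign each account to tier A, B, or C based on its blended score."""
    tiers = {}
    for a in accounts:
        s = score(a)
        tier = "A" if s >= 75 else "B" if s >= 50 else "C"
        tiers.setdefault(tier, []).append(a["name"])
    return tiers

accounts = [
    {"name": "Acme", "fit": 90, "intent": 80},
    {"name": "Globex", "fit": 60, "intent": 55},
    {"name": "Initech", "fit": 30, "intent": 40},
]
print(segment(accounts))
```

In practice the fit and intent inputs would themselves come from predictive models enriched with third-party data; the tiers then feed territory and quota planning.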
Why Greater Transparency is Needed in the AI Arms Race. Commentary by Miles Ward, CTO at SADA
“In the ongoing AI arms race, knowledge sharing has been all but ignored. Companies want to remain competitive and ahead of the curve when building AI models, and doing so means protecting proprietary information, such as details on the data powering these models and how sensitive information is secured. But as AI training costs decrease and models continue to get more sophisticated and complex, a lack of transparency in the development process could bring negative consequences and real-world impacts.
To ensure this future doesn’t become a reality, it’s important that the industry also creates a robust infrastructure for AI information sharing. Fortunately, there is a potential solution: patents. By offering the ability to patent AI models, companies can receive exclusive rights to their data, allowing them to either solely use their data or sell it to others. Not only do companies protect their data, but since the information is publicly available, it can also spark innovation while building transparency on how AI is being used. A combination of patent applications and regulatory oversight will help pave the way for more transparency across the AI ecosystem, ensuring a safer, more ethical rollout of high-risk AI tools.”
Why Hybrid Data Warehousing will Redefine Enterprise IT Infrastructure. Commentary by Lewis Wynne-Jones, VP of Product at ThinkData Works
“When it comes to data architecture, the past ten years have been marked by a consistent shift from centralized to non-centralized thinking. The emergence of the cloud in 2012 was a push towards decentralization that was then countered by the advent of the data lake a few years later. Now, concepts such as data fabrics once again prioritize federated data warehousing while still fundamentally requiring centralized data access. The good news is that technology is finally coalescing around the required use case, and it is now possible to maintain a federated storage model without sacrificing the centralized discoverability and use that’s required for a modern data-driven organization.
The twin pillars of this new approach are multi-cloud warehousing and data virtualization. The first of these is not a technology approach, rather a pragmatic one. As cloud providers continue to dominate the market, it’s a good idea for most companies to start using trusted cloud data warehouses like Snowflake and BigQuery alongside their old warehousing tech. In many cases, it will make sense to migrate legacy data from old servers into these new warehouses, where you can enjoy the benefits they provide. That said, a full-scale lift-and-shift is often not necessary, as there will likely always be some data that you want outside of the cloud. Whether it’s user data that you want to keep on-prem, or low-velocity data that you want to keep in an open source warehouse, the reality is that no one warehouse is a panacea: each one has benefits and drawbacks.
By using cloud warehousing for the data you want your team to use, on-prem servers for sensitive data, and open source databases for everything else, for example, you not only unlock real cost savings but achieve a more streamlined approach to warehousing; one that uses best of breed technology for specific applications.
When that’s done, the trick is to virtualize the data into a central environment to increase governance, role-based access, and efficient streaming of the data into downstream processes. This approach will, if applied correctly, give you the security of decentralized warehousing, the benefit of cloud services, and the strategic advantage of increased data access and governance from a central management layer.”
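The central management layer described above can be sketched as a toy catalog that routes reads to different backends while enforcing role-based access. This is a minimal illustration under assumed names and roles, not ThinkData Works' product or any real virtualization engine.

```python
# Sketch: a toy "virtualization" layer exposing datasets from several
# backends (cloud, on-prem, open source) through one access-controlled
# catalog. Backends, roles, and dataset names are illustrative.

class Catalog:
    def __init__(self):
        self._sources = {}  # dataset name -> (fetch function, allowed roles)

    def register(self, name, fetch, roles):
        """Register a backend fetcher and the roles allowed to read it."""
        self._sources[name] = (fetch, set(roles))

    def read(self, name, role):
        """Fetch a dataset if the caller's role is permitted."""
        fetch, roles = self._sources[name]
        if role not in roles:
            raise PermissionError(f"role '{role}' cannot read '{name}'")
        return fetch()

catalog = Catalog()
catalog.register("sales", lambda: ["cloud warehouse rows"], roles={"analyst", "admin"})
catalog.register("pii", lambda: ["on-prem rows"], roles={"admin"})

print(catalog.read("sales", role="analyst"))
```

The key design point is that consumers see one catalog and one access policy, while the data itself stays wherever it is best stored.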
What could be next for AI regulations across industries. Commentary by Shameek Kundu, Head of Financial Services and Chief Strategy Officer of TruEra
“The meteoric rise of Generative AI has led to widespread demand for *some* regulation of AI. However, there is little agreement on what this might look like. Our hypotheses: (i) AI Regulation will remain messy and unwieldy, at least in the next 3-5 years; (ii) Regulators will be forced to expand their focus when thinking about AI risk. In addition to existing concerns like unjust bias, aspects like transparency and IP rights on training data (EU AI law) and market concentration in foundation models will become important; (iii) For the most part, AI developers and adopters will have to deal with a national or regional patchwork of regulations. International agreement on rules will remain elusive, except perhaps in niche areas like AI in the battlefield; (iv) Much of the early enforcement or litigation action will be on the back of existing laws and regulations, such as those on anti-discrimination, intellectual property rights and privacy, or industry specific requirements such as those in insurance, banking and medical devices; and (v) An ecosystem of players – standard setters, auditors, law firms, testing tools and perhaps others – will emerge within the next 2-3 years to help test and certify AI applications.
Amidst such uncertainty, “waiting for clear regulations to emerge” is not a viable strategy, at least for large enterprises. The risk of reputation damage, the need to retain customer and employee trust, and the broader focus in enterprises on ESG objectives will force most down the route of their own internal guardrails, in anticipation of what their regulators produce in the coming years.”
Unanalyzed data is the secret to business success that no one is talking about. Commentary by Bayley Fesler, Director of RevOps at Xactly, and Annie Jones, RevOps Business Partner at Xactly
“Data is in no short supply at most organizations, but extracting value from the data that can be turned into strategy is not always easy. Take sales data for example – sales leaders have access to data on virtually every detail of a deal, from the number of days it takes to close a deal to the average price per product. But these are just numbers unless sales leaders have the right tools to translate data into a narrative about their business.
This is especially true in sales forecasting. With so much data readily available, forecasting should not be a guessing game. AI-driven forecasting tools transform sales data into easily digestible metrics and visuals that help sales identify what is and is not working in their sales cycles. This information, combined with increased collaboration with cross-functional partners, informs sales leaders on which levers to pull to improve their overall business outcomes, thus combining the art of sales with the science of forecasting. The answer to growing organizational revenue is in the data; you just have to know where to look.”
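As a baseline for the data-driven forecasting idea above, even a transparent least-squares trend beats guesswork. This is a deliberately simple sketch with made-up quarterly figures; real AI-driven forecasting tools use far richer models and signals.

```python
# Sketch: fit a least-squares trend y = a + b*t to quarterly bookings
# and project one quarter ahead. Figures are illustrative only.

def linear_forecast(history):
    """Fit a straight line to the series and return the next-step projection."""
    n = len(history)
    t = list(range(n))
    mean_t = sum(t) / n
    mean_y = sum(history) / n
    b = sum((ti - mean_t) * (yi - mean_y) for ti, yi in zip(t, history)) \
        / sum((ti - mean_t) ** 2 for ti in t)
    a = mean_y - b * mean_t
    return a + b * n  # projection for the next quarter

bookings = [100, 110, 125, 135]  # last four quarters, in $k
print(linear_forecast(bookings))
```

A simple model like this is easy to explain to cross-functional partners, which is exactly the kind of shared narrative the commentary argues forecasting data should produce.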
AI is transforming healthcare, but we can’t forget the human element. Commentary by Leslie Pagel, Chief Evangelist Officer, Authenticx
“In the 21st-century digital world, the healthcare industry and its customers expect seamless, personalized and efficient interactions. But the healthcare journey is long and complex, involving multiple touchpoints with different departments and providers, creating barriers for customers to get the information they need. The growth of contact centers, chatbots, electronic health records (EHRs), social media and other factors have significantly increased the volume of communication within this industry, making it difficult for organizations to keep pace and meet patient expectations for ease and speed.
The vast amount of unstructured data healthcare organizations generate makes it virtually impossible for healthcare leaders to hear and resolve customer pain points. Industry-specific artificial intelligence (AI) is the solution. AI solutions built specifically for healthcare allow healthcare leaders to harness a source of data generated by customer interactions. That data can then be organized for purposes of listening to the patient journey, delivering improved patient outcomes and strengthening customer loyalty. Integrating AI into healthcare isn’t about replacing humans with machines but enabling and empowering human decisions. Complementing humans with machine support — and vice versa — is the core AI differentiator.”
Predicting AI’s impact on the marketing landscape in the latter half of the year. Commentary by Ariel Geifman, CRO at Dealtale
“In previous years, marketers would have to segment their customers and then send each segment a message that could resonate with them – but with AI, we’re going to see those segments narrow down to the individual level, where each customer can be sent their own unique, personalized message. Automated tools will allow marketers to communicate value to the consumer based on their own customer journey, preferences, purchase history and more. Customers will get the feeling that the vendor organization ‘knows’ them on a personal level – even if it’s fully generated by AI.”
The Good and Bad Sides of AI in Security. Commentary by Reed Taussig, CEO of AuthenticID
“AI will completely reshape both sides of the cybersecurity coin. On the one hand, businesses will have to protect themselves from an onslaught of AI-powered attacks. Scenarios such as bad actors creating counterfeit driver’s licenses to “verify” their identity or generating a deepfake to spoof authentication will continue to become more common as AI evolves. While recent developments have opened new avenues for security threats, the good news is that businesses can also rely on AI to better protect themselves and their customers.
AI is superior at detecting suspicious activity. For example, it can validate a document’s authenticity within seconds or identify when a person’s behavior seems suspicious or unusual based on their previous patterns. As companies navigate how to keep security threats at bay, AI will be an essential element in their security toolkit. In fact, tapping into AI is the only way to stay one step ahead.”
Improving quarterly reporting processes. Commentary by Bill Koefoed, CFO of OneStream Software
“Producing timely and accurate quarterly reporting for external stakeholders begins with having efficient and effective internal management reporting processes that support informed and agile decision-making on a daily, weekly and monthly basis. Then it requires having the systems and processes in place that support timely and accurate reporting to external stakeholders on a quarterly basis.
If an organization is struggling to run critical financial and operational reporting and planning processes on spreadsheets and email, it may be time to upgrade to an advanced, cloud-based corporate performance management solution. These solutions integrate with and complement existing Accounting, ERP, CRM, HCM and other systems, and they support the collection and consolidation of financial and operational data for management reporting and decision-making, as well as accurate and timely quarterly reporting and analysis for internal and external stakeholders.”
Sign up for the free insideAI News newsletter.
Join us on Twitter: https://twitter.com/InsideBigData1
Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/
Join us on Facebook: https://www.facebook.com/insideAI NewsNOW