Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.
Reaction to Google’s AI Model Gemini. Commentary by Astronomer’s CTO, Julian LaNeve
“Google is clearly establishing itself as a leader in the generative AI and LLM space. After releasing Bard, it seems the company went heads-down to build its next generation of models, Gemini. It looks really interesting for a few reasons: (i) Gemini comes in different models built for different use cases; in particular, the Gemini Nano model, purpose-built for specific tasks and lower-compute devices, feels like a great idea as we start to see LLMs used in more areas; (ii) the focus on multi-modality from the beginning means the models can interact with different forms of content much more naturally than other approaches I’ve seen, which involve stitching together independent AI models purpose-built for specific content types; (iii) the largest model, Gemini Ultra, is claimed to be the first to outperform human experts on the massive multitask language understanding (MMLU) benchmark. This is really cool: generally the way you get great accuracy is by fine-tuning models for specific use cases, so getting a much larger base of knowledge “out of the box” with Gemini Ultra is amazing.
These massive foundational models typically take months to train, so Google’s foresight into multi-modality from the start is impressive.”
The Commodification of Design with the Rise of AI. Commentary by Won J. You, Founder of Won J. You Studios
“As AI tools continue to evolve, they will change not only the way design is done but also how it’s valued. AI tools are already making their way into designers’ everyday processes. But this newfound ability to create designs faster and cheaper will bring a variety of unintended consequences for the design profession. One potential drawback is the looming threat of the commodification of design. Free design resources, open source files, and templates on sites like Canva have already made it much easier for non-professionals to create high-quality designs for everything from business cards to websites and social media posts.
But with generative AI, design becomes universally available to anyone who can write a prompt. In effect, a sudden rise in infinite design supply will drive the value of design output toward zero. One of the competitive advantages long held by professional designers has been mastery not only of the craft but also of the tools of the trade needed to execute a creative vision. With generative AI, this moat will be practically nonexistent. And the proliferation of AI-generated designs will further diminish how people perceive and value design.
The production abilities of designers will no longer be a critical skill set; rather, a person’s strategic thinking and creative vision will be what’s most valuable. In that vein, originality will continue to be sought after, but it will become the domain of a select group of elite designers, say the top 2%. The transformation will be akin to the rise of fast fashion: low-cost designs will become mass-market, and AI-generated designs will become prevalent and disposable. Design automation will be just as disruptive to design jobs as robots have been to jobs in manufacturing. Design trends and styles will become exceedingly transient and ephemeral. Just as with haute couture and avant garde design, trendsetters will still be needed to imagine novel ideas and forge new creative territory, but the number of these design jobs will likely be smaller than it is today.”
Leveraging AI to push forward sustainability goals. Commentary by Brianna Hogan, Chief Engineer, BrightLabs
“While not often acknowledged, there is a clear connection between technologies and their negative impacts on the environment. Data, for example, is seen as having no physical form and no tangible impact on our environment, but the opposite is true. Data centers have a significant impact on the physical world: worldwide, they consume more electricity than entire countries. AI algorithms in particular tend to be energy hogs, with a single algorithm able to generate as much carbon emissions as 470 people flying from New York to San Francisco.
While AI can contribute significantly to the climate crisis, the technology can also be one of the most efficient solutions, helping to improve energy efficiency through machine learning techniques such as distillation, fine-tuning, and pruning. These techniques can deliver comparable accuracy with less energy consumption and reduced cost, a win-win all around. Developing smaller, optimized AI algorithms can help organizations meet their goals and more easily offset their own carbon emissions, helping to mitigate climate change.
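As a concrete illustration of one of these techniques, here is a minimal sketch of magnitude-based weight pruning using PyTorch’s built-in pruning utilities. The toy network, layer choice, and 30% pruning amount are illustrative assumptions, not specifics from the commentary.

```python
# A minimal sketch of magnitude-based weight pruning with PyTorch.
# The network, layers, and 30% pruning amount are arbitrary choices
# for illustration, not recommendations from the commentary.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small example network standing in for a larger production model.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Zero out the 30% of weights with the smallest L1 magnitude in each
# Linear layer; sparser weights can translate into less compute.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning in permanently

# Confirm the resulting sparsity across all parameters.
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"Sparsity: {zeros / total:.1%}")
```

In practice a pruned model is usually fine-tuned for a few epochs afterward to recover any lost accuracy, which is how pruning and fine-tuning work together as the commentary suggests.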
Fixing the problem begins with acknowledging that it exists. No matter your role, everyone inside an organization can help move the needle toward a greener future, for example by reducing the amount of data stored in the cloud through deleting content such as emails and pictures. For organization leaders, focused investment in the education, strategy, and execution of sustainable cloud and AI practices will be key as well. We are increasingly feeling the effects of climate change, so the responsibility is on us to do everything possible to curb emissions from the innovative technologies that are becoming a staple of business operations.”
How AI Reconnects Buyers and Sellers. Commentary by Tony Grout, Chief Product Officer at Showpad
“The existing gap between buyers and sellers is widening in our digital hybrid world. The internet was a game-changer, making tons of information readily available through company websites, online communities, review sites, and more. And now, the age of generative AI is upon us, bringing with it another major inflection point. Today, it’s easier than ever for buyers to access and process information when making purchasing decisions.
These advancements are widening the disconnect between buyers and sellers, specifically in how buyers self-serve for part of the buying journey and in what value sellers bring to the table. This is especially true for companies that sell complex, high-value products requiring in-depth evaluation by multiple stakeholders over lengthy buying cycles.
In sum, AI is forcing sellers to evolve, but it’s also empowering them to succeed in new ways. Sellers must now adapt to buyers’ shifting expectations. Buyers expect sellers not just to be a source of information but to help them make sense of it in the context of their business and the challenges they’re facing. To succeed, sellers must show up as true consultants.
AI’s potential shines when it addresses tasks that are challenging for humans to do but simple for us to verify. This makes the power of AI evident in sales enablement, especially when it bolsters the value sellers bring to the sales process, strengthening buyer relationships.
The influence of AI extends to every facet of the sales enablement ecosystem, from reshaping information discovery and content creation to coaching sellers to deliver information more effectively, all while creating engaging experiences that build buyer trust and confidence. AI marks a new dawn for our industry, one that brings buyers and sellers back together.
It’s my belief that sales enablement’s future rests on augmenting the power of human relationship-building with the power of generative AI. As well as automating time-consuming tasks, AI can augment seller capabilities to drive the ultimate buyer-seller relationships. Without such tools, sales teams risk missing big data insights from buyers. But with AI, informed decision-making is enhanced, and traits like empathy, imagination, and human inventiveness differentiate top sellers from the competition. From the disconnect to the reconnect, AI has the power to align buyers and sellers on an unprecedented scale, and the net benefit for business is almost unimaginable.”
Issues of data privacy concerning LLMs like ChatGPT. Commentary by Zuzanna Stamirowska, CEO and Co-Founder of Pathway
“Enterprise adoption of Large Language Models (LLMs) for real-time decision-making on proprietary data has struggled to take off despite the Generative AI hype. A key reason is legitimate data privacy concern over sharing intellectual property and sensitive information with LLMs like GPT and Bard. In May, for example, Samsung Electronics banned employees from using Generative AI tools after sensitive internal source code was accidentally leaked by an engineer. These concerns have also catalyzed governments across the world to deepen their focus on the regulatory questions around AI, from the UK’s AI Safety Summit next month to the EU AI Act (for which, incidentally, we released code to help people query the 108-page document and understand what it means for them).
But bans aren’t a viable long-term solution. LLMs present a massive opportunity for indexing and searching enterprise data, across both structured and unstructured sources, as well as for asking questions about current operations in natural language. Instead, enterprises need to be able to build LLM applications they can trust, which can mean developing self-hosted private LLM applications, with the data staying secure and undisturbed in its original storage location. Ensuring that application owners maintain complete control over the input data and the application’s outputs would open up the opportunity for LLMs to be used in cases that draw on sensitive data and intellectual property. The challenge will be to ensure that AI operators follow similar safety frameworks to remain compliant with burgeoning regulations such as the EU AI Act. Such regulation is likely to set a global precedent that many countries will attempt to follow, and AI providers will have to comply to roll out their solutions in those markets.”
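To make the self-hosted pattern concrete, here is a hedged sketch of an application querying an LLM served entirely inside an enterprise’s own infrastructure, so proprietary context never leaves it. The endpoint URL, payload schema, and response field are hypothetical placeholders for whatever serving stack a given enterprise actually runs.

```python
# Sketch of querying a self-hosted LLM so sensitive data never leaves
# the company's network. The URL and payload/response fields below are
# hypothetical; substitute your own serving stack's API.
import requests

LOCAL_LLM_URL = "http://localhost:8080/generate"  # hypothetical internal endpoint

def ask_private_llm(question: str, context: str) -> str:
    """Send a question plus retrieved internal context to a model
    hosted entirely inside the enterprise network."""
    payload = {
        "prompt": f"Context:\n{context}\n\nQuestion: {question}\nAnswer:",
        "max_tokens": 256,
    }
    response = requests.post(LOCAL_LLM_URL, json=payload, timeout=60)
    response.raise_for_status()
    return response.json()["text"]  # hypothetical response field

# Usage: the context string would come from the enterprise's own
# document index rather than being sent to a third-party API.
# print(ask_private_llm("What were Q3 shipping delays?", internal_docs))
```

The design point is simply that both the prompt and the completion stay on infrastructure the application owner controls, which is what keeps sensitive data and intellectual property out of third-party hands.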
What are the three waves of AI and how can we prepare for the next wave? Commentary by Erkin Ötleş, AI Practice Lead at HTD Health
“The evolving landscape of Artificial Intelligence has been described as going through “three waves of AI,” each bringing to mind distinct definitions and applications. The first wave, described as “Classification AI,” involves the identification of previously unknown attributes of an entity. In healthcare, this translates to predicting whether a patient is predisposed to future afflictions such as sepsis, or ascertaining the likelihood of infection in a hospital.
The second wave, known as “Generative AI,” revolves around the creation of new data given a specific scenario or prompt. In the medical sphere, this could manifest as generating physicians’ notes from information gleaned automatically from conversations between physicians and patients. The final wave, termed “Interactive AI,” encompasses tools that continually engage with users and the environment to achieve specific objectives. An example is a glucose control system tasked with keeping the blood sugar of patients with type 1 diabetes in a safe range.
These waves do not entail groundbreaking AI methodologies per se, but rather the integration of established techniques with innovative user interfaces and new application areas. “Classification AI,” for instance, spans techniques from conventional statistical learning methods like logistic regression to cutting-edge deep learning models such as transformers. Similarly, “Generative AI” leverages various pre-existing techniques; classic tools like Markov models can generate new data through sampling. “Interactive AI” draws on tools like dynamic programming and reinforcement learning, which have facilitated human-robot interaction for decades.
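As a toy illustration of that point, the sketch below builds a word-level bigram Markov model from a few clinical-sounding sentences and samples new text from it: “generative AI” with a decades-old technique. The corpus and parameters are purely illustrative.

```python
# Toy word-level Markov (bigram) model: generating new text by
# sampling, a classic technique. Corpus and parameters are illustrative.
import random
from collections import defaultdict

corpus = (
    "the patient presents with fever . "
    "the patient reports mild pain . "
    "the physician notes stable vitals ."
).split()

# Count word -> next-word transitions observed in the corpus.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def sample(start: str = "the", length: int = 8) -> str:
    """Generate new text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length - 1):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(sample())  # e.g. "the patient reports mild pain . the physician"
```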
These three “waves” describe AI’s application and user experiences instead of describing fundamentally new AI methods. However, as new application areas are established, new AI techniques and theory will be needed to ensure the safety and efficacy of these tools. These methods include developing better ways to understand the world around us and finding innovative approaches to organize and use the information we already have.”
AI is Still Too Limited to Replace People. Commentary by Mica Endsley, a Fellow of the Human Factors and Ergonomics Society (HFES)
“NVIDIA’s CEO Jensen Huang declared that AI will be “fairly competitive” with people within five years, echoing the rolling “it’s just around the corner” claim we have been hearing for decades. But this view neglects the very real challenges AI is up against. AI has made impressive gains thanks to improvements in machine learning and access to large data sets, but extending these gains to many real-world applications in the natural world remains challenging. Tesla’s and Cruise’s automated vehicle accidents point to the difficulties of implementing AI in high-consequence domains such as military, aviation, healthcare, and power operations. Most importantly, AI struggles to deal with novel situations it has not been trained on. The National Academy of Sciences recently released a report on “Human-AI Teaming” documenting AI technical limitations that stem from brittleness, perceptual limitations, hidden biases, and the lack of a model of causation that is crucial for understanding and predicting future events.
To be successful, AI systems must become more human-centered. AI rarely fully replaces people; instead, it must interact with people successfully to provide its potential benefits. But when the AI is not perfect, people struggle to compensate for its shortcomings. They tend to lose situation awareness, their decisions can become biased by inaccurate AI recommendations, and they struggle to know when to trust it and when not to. The Human Factors and Ergonomics Society (HFES) developed a set of guardrails to make AI safe and effective, including the need for AI to be both explainable and transparent in real time regarding its ability to handle current and upcoming situations and the predictability of its actions. For example, ChatGPT provides excellent language capabilities but very low transparency regarding the accuracy of its statements: misinformation is mixed in with accurate information with no clues as to which is which. Most AI systems still fail to provide users with the insights they need, a problem that is compounded when capabilities change over time with learning. While it may be some time before AI can truly act alone, it can become a highly useful tool when developed to support human interaction.”
The EU’s AI Act is a victory for startups across Europe. Commentary by Victor Botev, CTO and co-founder of Iris.ai
“This is a victory for European startups across the continent. Large multinationals alone should never represent the needs of Europe’s AI ecosystem. These measures will ensure that we have a seat at the table and a hand in our own fate. Germany, Italy, and France’s insistence on nurturing and protecting homegrown startups has been critical to this breakthrough.
Going forward, we must ensure these regulations don’t place an onerous burden on startups and open-source communities, limiting their ability to scale and compete with corporate behemoths headquartered in the U.S. As we move towards final drafts of the legislation, startups can help the EU strike the right balance between safeguarding citizens and fostering Europe’s dynamic AI startup ecosystem.”
The EU AI Act | An Open Source Perspective. Commentary by CEO of OpenUK, Amanda Brock
“The political negotiations may be over, but many in the open source community will anxiously await the full technical detail of these regulations.
The agreed AI Act text isn’t available and likely doesn’t yet exist, but we’re told there will be exceptions around “free and open-source software”. Unlike the effort put into defining AI (just look at the OECD rushing to create a new definition of AI to be incorporated into the EU AI Act), it is unclear what “free and open-source” means in the context of the Act, in Europe and generally across the planet. That’s despite some massive benefits being offered up for “free and open-source AI”.
As regulators scramble to understand AI and the impact of its inevitable opening up, this is a global challenge. Some with a vested interest might suggest that it doesn’t matter. However, the impact is potentially huge, as the Act is the first of many to offer exemptions and carve-outs for “free and open-source”.
Policy makers and regulators have generally adopted the requirement for open source software to meet an Open Source Initiative (OSI) approved licence, ensuring code meets the Open Source Definition (OSD) and enabling the free flow of the software for any purpose. This has not been the case for the AI consultations. A few wealthy companies with open source products, and organisations claiming their AI is open, have been the loud voices, overshadowing those with deep open source understanding and community representation.
Regulators should beware the potential for the terms “open source” or “free and open-source” to be used as a Trojan horse. Differing levels of openness have varying commercial impacts and community benefits. For example, Meta stands to gain commercially from the use of Llama 2, since users cannot entirely rely on the free flow of the software given the licence’s commercial restrictions. This contrasts with the UAE’s Falcon LLM, licensed under the open source Apache 2.0 licence, which allows a free flow.
Not all open AI is created equal today, and not all of it may merit the benefits intended for “free and open source software”. Rarely has a definition mattered so much to the future of so many.”
EU AI Act Deal. Commentary by Dr. Kjell Carlsson, head of AI strategy at Domino Data Lab
“How does a fine of 7% of your global revenue sound to you? How about 3% or 1.5%? Those are the fines proposed in the new EU deal for leveraging AI for a banned use case, for not complying with the law, and for providing incorrect information, respectively. To put it mildly, these fines are monumental, potentially devastating to any company forced to pay them, and even higher than the fines for failing to comply with GDPR. Notably, they are also completely detached from any existing examples of damage caused by AI to date.
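For a sense of scale, here is a back-of-the-envelope sketch applying those three percentages to a hypothetical company; the EUR 10 billion global revenue figure is purely an illustrative assumption.

```python
# Back-of-the-envelope scale of the proposed EU AI Act fines for a
# hypothetical company with EUR 10 billion in global annual revenue.
fine_rates = {
    "banned use case": 0.07,
    "non-compliance": 0.03,
    "incorrect information": 0.015,
}

global_revenue_eur = 10_000_000_000  # illustrative assumption

for violation, rate in fine_rates.items():
    print(f"{violation}: EUR {global_revenue_eur * rate:,.0f}")
# banned use case: EUR 700,000,000
# non-compliance: EUR 300,000,000
# incorrect information: EUR 150,000,000
```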
These fines may come into force as early as 2025, but the chilling effect these regulations will have on AI investment has already started. Every organization with operations in the EU will be reviewing its existing AI investments, subjecting every new initiative to thorough scrutiny, and delaying every project with even a hint of regulatory risk. EU-based AI startups will be accelerating their plans to move outside the EU, following in the footsteps of investor funding.
Whatever the merits of the provisions in the new agreement (and, to judge from earlier drafts, many of the proposed rules will be sensible), they will be dwarfed by the cooling effect this will have on AI research, commercialization, and adoption by firms in the EU. Fear of these draconian fines will translate directly into fewer AI-based products and services, less competitive firms, fewer highly paid jobs, and fewer discoveries in science and medicine. Sadly, EU citizens are unlikely to be any safer from the misuse of AI, because most organizations that intentionally misuse AI are already operating illegally and care little about regulation.
Happily, companies operating in the EU can buck this trend. They can continue investing in and adopting AI methods with little to fear from the upcoming AI regulations and outcompete their stymied brethren. They just need to invest in the foundational capabilities for governing the design, development, deployment, and ongoing operation of their AI models and pipelines. Advanced AI organizations in highly regulated industries, like financial services, insurance, and pharmaceuticals, already do this today. They have been building the talent, processes, and platforms for governance and investing in AI executives who can oversee the responsible use of AI. All firms should invest in these capabilities to leverage the benefits of AI, but for any EU firm looking to compete internationally, they are now vital for survival.”
Thoughts on AI and adtech. Commentary by Husna Grimes, VP Global Privacy, Permutive
“Innovation, productivity, and creativity make working with AI advertising vendors more appealing than ever. But without a regulatory framework, publishers and advertisers must do their due diligence to select responsible, reputable, and ethical AI vendors. It’s better to ask questions about usage, privacy, and personal data now — rather than after the brand damage has been done.”
Google launches AI model challenging OpenAI? Commentary by Alon Yamin, co-founder and CEO of Copyleaks
“The announcement from Google regarding the release of its long-awaited Gemini AI model and its capabilities, particularly around code, where it appears to exceed the performance of GPT-4, is yet another example of the rapid acceleration of AI and its potential. While this degree of rapid innovation opens tremendous opportunities that the world is still trying to figure out, it also reinforces the need for safeguards to help ensure responsible adoption, transparency, and safety, starting now and continuing into the future. It’s encouraging to see that the lessons learned from the age of social media, itself a disruptive technology with minimal oversight to date, are starting to be applied to AI, with the release of Gemini and its sheer capabilities simply reinforcing this need.”
Apple launches open source AI framework MLX. Commentary by Amanda Brock, CEO of OpenUK
“A major tech company challenging the status quo of the AI marketplace with actual open source makes perfect sense. In the case of MLX, we are looking at real open source: an MIT licence, meeting the open source definition and approved by the Open Source Initiative. It was only a matter of time before one of the key players took this step, following on from Meta’s almost-but-not-quite open source Llama 2 LLM release in July; with Apple’s disruptive history of reinvention, it is not entirely surprising that Apple has boldly gone there.
MLX is Apple’s strongest public bet on the open source technology that has underpinned much of its historic success.
It’s no secret that Apple has generated success from products based on open source software. But in recent years we have seen some strong open source engineers join the organization, and a shift within the business towards more open source around Kubernetes, Swift, and WebKit.”
Amazon criticizes Azure business practices. Commentary by Mark Boost, CEO of Civo
“This public disagreement between the hyperscalers is nothing but a distraction from the big picture. Whatever each one may claim, the fact remains that the status quo in the cloud market is unsustainable and anti-competitive. We cannot have a situation where businesses using the cloud are hemmed in by opaque pricing, dauntingly complex services, and data egress fees that make it difficult to move to another provider.
The CMA investigation can be a turning point for the UK. We can start building an environment where any company can develop cutting-edge cloud services and customers have the freedom to find the best solution for them. If we want truly to build a better cloud space here in the UK, we need to focus on addressing the issues that matter most to customers. Predictable pricing, simplified services, and ease of access can all contribute to a more competitive landscape where cloud realizes its full potential. While the hyperscalers are taking shots at each other, emerging cloud providers are rapidly stepping up to offer an alternative way forward.”
Make AI fairly cooperative rather than fairly competitive. Commentary by John D. Lee, a member of the Human Factors and Ergonomics Society (HFES)
“Jensen Huang, NVIDIA’s CEO, recently claimed that AI will be “fairly competitive” with people within five years. AI is already more than fairly competitive, but this might have little to do with its ultimate influence. AI has shown superhuman ability in domains as diverse as chess, law, medicine, and marketing. However, AI is unlikely to make games irrelevant or to replace physicians. Instead, AI will transform jobs. Recent studies indicate that AI can increase productivity by 30 to 60%. AI represents a computing revolution like the graphical user interface and the internet, and the promise of that revolution hinges on AI’s ability to cooperate with people rather than compete with them. The AI revolution needs to extend the metaphor of computers as bicycles for the mind, where computers cooperate with and amplify human contributions rather than compete with and replace them. A challenge to developing a cooperative relationship with AI is that despite its superhuman abilities in certain areas, it performs very poorly in others. Furthermore, this uneven performance is masked by a tendency to appear “truthy” without being truthful: it relates its hallucinations in fluent sentences that project confidence and competence.
The uneven landscape of AI capability is like bicycling down a winding mountain road filled with deep potholes in the dark: an exciting ride punctuated by dreadful surprises. However, just as cars didn’t need to outperform horses in every respect to revolutionize society, AI doesn’t need to excel in all domains to be useful. The key lies in making AI a cooperative partner that conveys its uncertainty and illuminates the potholes. Just as with the graphical user interface and the internet, success depends on designing AI to amplify people rather than compete with them. We need to make fairly cooperative AI rather than fairly competitive AI.”
Sign up for the free insideAI News newsletter.
Join us on Twitter: https://twitter.com/InsideBigData1
Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/
Join us on Facebook: https://www.facebook.com/insideBIGDATANOW