Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!
U.S. Presidential AI Executive Order (various commentaries):
“We believe in adequate and effective regulation of AI. We applaud the Administration’s significant efforts to promote AI innovation, an open and competitive AI marketplace, and the broad adoption of AI. We are also pleased to see the support for open source and open science. We look forward to contributing to the process the Commerce Department is starting, looking at the importance and concerns around continuing to allow the open sourcing of model weights of foundation models.” – Ali Ghodsi, CEO and co-founder, Databricks
“AI safety requires AI governance, and the dirty secret in the AI industry is that the weakest link in AI governance is data pipelines. The manual, bespoke AI data workflows used by most enterprises need to be redesigned and industrialized. AI governance for data requires data provenance, data documentation (especially semantics), role-based access control (RBAC) security, identification of downstream consequences of changes, version control, formal approval processes for all changes, and audit trails.” – Colin Priest, Chief Evangelist at FeatureByte
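Priest’s checklist maps naturally onto metadata that travels with every pipeline change. Below is a minimal, hypothetical sketch in Python of what such a governance record might look like; the field names, roles, and approval rule are illustrative assumptions, not FeatureByte’s design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeatureChange:
    feature_name: str
    semantics: str                    # human-readable meaning (data documentation)
    source_tables: list[str]          # data provenance: where inputs come from
    downstream_consumers: list[str]   # who is affected if this changes
    version: str                      # version control reference
    author: str
    approved_by: str | None = None    # formal approval before deployment
    audit_log: list[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        """Append a timestamped entry to the audit trail."""
        self.audit_log.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

# RBAC: only these (hypothetical) roles may submit pipeline changes.
ALLOWED_ROLES = {"data_engineer", "ml_engineer"}

def submit_change(change: FeatureChange, role: str) -> None:
    if role not in ALLOWED_ROLES:
        raise PermissionError(f"role '{role}' may not modify pipelines")
    if change.approved_by is None:
        raise ValueError("change requires formal approval before deployment")
    change.record(f"deployed v{change.version} by {change.author}")
```

The point of the sketch is that each of Priest’s requirements becomes a concrete, checkable field rather than tribal knowledge.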
“Issuing watermarks on official federal agency content to prove authenticity is a necessary step to reduce misinformation and educate the public on how to think critically about the content they consume. One note here is that it will be critical that watermarking technology can keep up with the rate of AI innovation for this to be effective.
We are also pleased to see the commitment to protecting a competitive AI ecosystem, ensuring there is room for innovation at all levels and not just within the biggest players. A key part of this equation for AI application companies — as opposed to the companies building foundation models — will be to ensure that new developments come from a diverse range of models. It’s important to us that interoperability, a term I see only once in the initial fact sheet on the order, remains a key concept as new models rapidly arise. Without it, we’ll see the same walled gardens of the past and the competitive landscape will quickly shrink, limiting innovation.
How this order takes form in execution is yet to be seen, but we’re encouraged by the direction and acknowledgment that a diverse ecosystem is worth protecting.
I am pleased to see the government’s pointed interest in providing AI training for government employees at all levels in relevant fields. In the last year, we’ve seen companies experiment with AI, placing substantial importance on vendor selection and implementation. We’ve seen stories of misuse by employees and incredibly strong business cases from organizations that have used AI responsibly for better business outcomes. It will be crucial that the US government models responsible and effective use of AI for the rest of the country.
… this can be a transformative piece of technology, but using it well requires upskilling in AI literacy and setting standards for responsible use. It’s good to see this well-timed order creating a model for that at the government level.” – Timothy Young, CEO of Jasper
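Young’s point about watermarking official federal content is, at its core, a content-authentication problem. A minimal sketch of one building block follows, using only Python’s standard library; the key and content are placeholders, and a real deployment would use public-key signatures so that verification requires no shared secret.

```python
import hmac
import hashlib

# Illustrative only: an agency binds official content to a secret key,
# so any alteration of the content invalidates the tag.
SECRET_KEY = b"agency-signing-key"  # placeholder; never hard-code real keys

def sign(content: bytes) -> str:
    """Return a hex tag binding the content to the signing key."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the content."""
    return hmac.compare_digest(sign(content), tag)

press_release = b"Official statement: ..."
tag = sign(press_release)
assert verify(press_release, tag)                      # authentic
assert not verify(press_release + b" (edited)", tag)   # tampered
```

Signing proves an official source published the content; watermarking model output, discussed further below, is the complementary problem of marking what a model generated.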
“President Biden’s plans to ensure that the United States leads the charge in AI regulation have left some questions for those working with open source models. Today, all of the best open source models are trained by big companies with enormous resources – but the increasing regulatory scrutiny on these models will almost certainly slow down the pace of innovation in the open source community. That means we’ll get fewer models to work with. It’s also unclear to what extent these new rules will apply to open source models used for pre-training or fine-tuning, as many companies increasingly rely on open source models like Falcon, Mistral and Llama 2 to build proprietary models fine-tuned for specific use cases. If these privately fine-tuned models need to be put through safety and security regulations, it will almost certainly limit organizations’ ability to rapidly build and deploy new models.” – Richard Robinson, CEO of Robin AI
“Today’s executive order (EO) represents the first White House-driven policy tied to AI regulation, and is a substantial step towards establishing more guidelines around the responsible use and development of AI. While the impact of AI on society has been profound for decades and will continue to grow, the EO aims to ensure a more secure and conscientious AI landscape. Safeguarding against its misuse and enforcing balanced regulation means that we can embrace the benefits and future of trustworthy AI.
The EO also acknowledges that AI heavily relies on a constant flow of data, including user and device information, some of which may be sent to entities outside the U.S., making the need for stronger requirements around identity verification even more pressing. As criminals find novel ways to use AI, we can fight fire with fire and use AI – in responsible ways – to thwart their efforts. Organizations that adopt AI-driven solutions have the power to detect anomalies and enemy bots, and to prevent fraud at massive scale. Identity verification will also play a major role in stopping attacks going forward, so stronger requirements around identity proofing, authentication and federation will be necessary.
As we continue to see further regulations emerge, the private sector must also take part in the effort and collaborate with public stakeholders to achieve more responsible AI worldwide.” – Andre Durand, Founder and CEO of Ping Identity
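Durand’s “fight fire with fire” point is concrete enough to sketch. The snippet below shows one common pattern for AI-driven anomaly detection – an isolation forest trained on historical traffic that flags outlying login events. The features, values, and threshold are invented for illustration and are not Ping Identity’s implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per login attempt:
# [requests_per_minute, failed_logins, geo_distance_km]
normal_traffic = rng.normal(loc=[5, 0.2, 50], scale=[2, 0.5, 30], size=(1000, 3))

# Train on historical (mostly benign) traffic; ~1% assumed anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

new_events = np.array([
    [4.8, 0.0, 42.0],       # looks like a typical user
    [300.0, 25.0, 9000.0],  # looks like a credential-stuffing bot
])
print(model.predict(new_events))  # 1 = normal, -1 = anomaly
```

In production, such a score would feed a step-up flow: anomalous events trigger stronger identity proofing rather than an outright block.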
“The federal government doesn’t want to seem flat-footed in terms of understanding AI’s implications and forming strategies guided by that understanding. They don’t want a replay of social media going unchecked for years. A few implications: (i) the compliance burden may benefit larger players like Google or Microsoft disproportionately by creating a barrier to entry for smaller vendors; (ii) training data will now become critical and may result in unintended consequences, since the perception of bias can itself be biased; (iii) the AI landscape should work toward clearer labeling of what content is human, AI-assisted, and purely AI, similar to GMO labeling standards.” – Vipul Vyas, SVP of Vertical Strategy for Persado
“The executive order on AI first and foremost promotes innovation – both the development of AI and the use of AI in the public and private sectors. And in line with promoting AI innovation, the order promotes innovation in the safe, secure and responsible use of AI. Promoting research and development is the right approach; rules that lock down or seek to regulate AI at this point would only have the opposite of the intended effect. The Crypto Wars of the 1990s showed that open research and commercial innovation, not restriction, lead to greater privacy.
What is important to keep in mind is that we should not race to create new regulation that slows down innovation and requires official certification – treating AIs today like weapons or pharmaceuticals. We need to promote research and innovation to achieve outcomes of standards, security, and safety instead of racing to apply rules and regulations from the last century. Technologies available today, from modern identity management to code signing, can be used to operate AI safely and promote innovation. Because the federal government is the single largest customer for Silicon Valley, the executive order will have a huge impact on future developments in the use of AI in the US, and around the world.” – Kevin Bocek, VP of Ecosystem and Community at Venafi
“I’ve been a long-time participant in AI safety discussions, starting as early as 2018 when it was still a niche topic within the Silicon Valley AI community. It’s encouraging to see AI existential risk acknowledged at the highest levels of government, especially as more and more companies adopt AI content-generation capabilities. This is a step in the right direction.
There are parts of this executive order to really like, especially the ideas around model provenance (i.e., content authentication and watermarking), which are difficult to incentivize without government intervention. To be effective, this will also require buy-in from other international entities, so hopefully this demonstrates enough thought leadership to encourage collaboration globally.
In general, it will be more straightforward to regulate model output, like watermarking. However, any guidelines around input (what goes into models, training processes, approvals) will be near impossible to enforce. It’s also difficult to tell just how much this executive order will impact startups, as we ultimately need to understand the extent to which companies must coordinate with the government via more specific guidelines. This will require time and expertise, both of which are in limited supply.
Expanding on critiques – there’s a lot to like, but also a lot that needs clarity. How do you define “training”? How do you define “the most powerful AI systems”? What comprises “other critical information”? How do you determine whether a model “poses a serious risk to national security”? The same paragraph then potentially contradicts itself by stating that not just “powerful AI systems” but “companies developing any foundation model” must be regulated. Any engineer who reads this will likely scratch their head in confusion. We’ll need concrete guidelines on the order of engineering specifications; otherwise, these guidelines are unlikely to carry weight. If it’s more than a one-way notification process and model approval is needed, this will be difficult, if not impossible, to enforce. It’s akin to regulating software development – an impossible task, as software development is an unwieldy, non-standard, largely artistic process with no standard protocols. Unless clear guidelines are in place, it will expose companies to undue regulatory risks.” – Justin Selig, Senior Investment Associate for Eclipse
“President Biden has a long history of considering how changes — such as widespread adoption of AI — will impact Americans across the economic spectrum. I think it’s reasonable to expect that he’ll want to strike a balance that preserves the lead the United States enjoys in AI development while ensuring some degree of transparency in how AI works, equity in how benefits accrue across society, and safety associated with increasingly powerful automated systems.
Biden is starting from a favorable position: even most AI business leaders agree that some regulation is necessary. He is likely also to benefit from any cross-pollination from the dialogue that Senator Schumer has held and continues to hold with key business leaders. AI regulation also appears to be one of the few topics where a bipartisan approach could be truly possible.
Given the context behind his potential Executive Order, the President has a real opportunity to establish leadership — both personal and for the United States — on what may be the most important topic of this century.” – Jaysen Gillespie, Head of Analytics and Data Science, RTB House
“It’s encouraging that the White House is beginning to take AI seriously at a broader level, moving us away from the patchwork approach that has so far unfolded state by state. AI has the potential to drastically improve how governments operate, protect privacy at large, and promote innovation, but care must be taken to ensure that the regulations go far enough.
“AI has been operating in the background of the world’s devices for years, and the public is now quickly adapting to the AI Age; regulators must pave the way before it’s too late. It is great to hear that officials are taking it seriously – writing a real definition of “AI” into our nation’s regulations, understanding how it will impact the public, and supporting future technological innovation across the US.” – Nadia Gonzalez, Chief Marketing Officer, Scibids
“The Executive Order on AI that was announced … provides some of the necessary first steps to begin the creation of a national legislative foundation and structure to better manage the responsible development and use of AI by both commercial and government entities, with the understanding that it is just the beginning. The new Executive Order provides valuable insight into the areas that the U.S. government views as critical when it comes to the development and use of AI, and what the cybersecurity industry should be focused on moving forward when developing, releasing and using AI: standardized safety and security testing, the detection and repair of network and software security vulnerabilities, the identification and labeling of AI-generated content, and, last but not least, the protection of individual privacy by ensuring the safeguarding of personal data when using AI.
The emphasis the Executive Order places on safeguarding personal data when using AI is another example of the importance the government has placed on protecting Americans’ privacy amid the advent of new technologies like AI. Since the introduction of global privacy laws like the EU GDPR, we have seen numerous U.S. state-level privacy laws come into effect across the nation to protect Americans’ privacy, and many of these existing laws have recently adopted additional requirements for using AI in relation to personal data. The various U.S. state privacy laws that incorporate requirements for using AI and personal data together (e.g., training, customizing, data collection, processing, etc.) generally require the following: the right for individual consumers to opt out of profiling and automated decision-making, data protection assessments for certain targeted advertising and profiling use cases, and limited data retention, sharing, and use of sensitive personal information when using AI. The new Executive Order will hopefully lead to the establishment of more cohesive privacy and AI laws that will assist in overcoming the fractured framework of the numerous current state privacy laws with newly added AI requirements. The establishment of consistent national AI and privacy laws will allow U.S. companies and the government to rapidly develop, test, release and adopt new AI technologies and become more competitive globally while putting in place the necessary guardrails for the safe and reliable use of AI.” – Michael Leach, Compliance Manager at Forcepoint
“Because of AI’s growing impact on technologies, industries, and even society as a whole, it’s incredibly important that the current administration put a continued emphasis on security. While I applaud the government’s desire to ensure AI is safe, it’s also imperative that regulation be balanced with the speed of innovation. If we slow down AI innovation significantly, foreign companies could out-innovate us and we risk falling behind in the AI race. And while rules are necessary, these regulations may only keep well-intentioned people in check; they will ultimately have no impact on threat actors, who will not follow them. During this time, we’ll need to rely on the private cybersecurity sector to help protect us from these malicious threats.” – Daniel Schiappa, CPO, Arctic Wolf
“It is crucial for the government to foster an open AI ecosystem, especially for startups. Cloud vendors monopolizing AI after heavy investments is akin to privatizing the electric grid. Such monopolization would stifle innovation and deter smaller players from contributing to the AI evolution. Right now the market does not need an AI grid; it needs an “AI mesh” with the ability to arbitrage quickly between one player and another.
Biden’s AI executive order underscores the immense potential that AI holds for blue chip companies. By adhering to standardized practices and leveraging AI’s capabilities, these established entities can accelerate their growth, increase efficiency, and potentially lead new waves of innovation.
While the EU leans towards stricter AI regulation, the US is striking a balance between innovation and responsible usage. This approach not only ensures the safe and ethical development of AI, but also positions the US as a leader in the global AI arena, fostering innovation while safeguarding public interests. The ability of the US government to invest heavily in AI will create opportunities for new AI players to grow into an initial market at a scale that no other country can replicate.” – Florian Douetteau, co-founder & CEO of Dataiku
“As someone living outside the US, I found the US President’s long-awaited executive order on AI remarkable, for three reasons: (1) The fact that it found a way to ride on existing laws and frameworks, rather than a new AI law; (2) The fact that it plans to use the US government’s (non-trivial!) procurement heft to drive traction in a messy space; (3) The fact that it is overwhelmingly focused on “here and now” dangers – e.g., misinformation, security and privacy threats, fraud, physical safety, privacy, impact on the workforce – vs. the potential longer term dangers of AGI.
(1) is presumably at least partially out of necessity, given US legislative challenges. But it is an approach that others considering new AI laws may want to explore; (2) is not an option that is open to every other country, but some like the EU, China and possibly India will probably try; (3) is just a pragmatic assertion of how much we have to do with the more “mundane” challenges of today, before we get to tomorrow’s existential ones.” – Shameek Kundu, head of financial services for TruEra
“AI technologies have been at the forefront of society in the last year. But while conversational and other types of AI have had a significant impact on organizations, every now and then the output from AI can be dramatically incorrect. Given this, the increasing democratization of data across organizations, and the occasional faultiness of AI due to bias and issues such as hallucinations, organizations must work hard to ensure the safe use of AI.
This executive order from the Biden administration – while directed at federal organizations – follows similar plans by other countries and the EU and is an important step towards ensuring responsible AI use. It will force many organizations to reevaluate their own processes and how they ethically leverage the technology.
The obvious solution is for companies to provide better training data to AI models, but since organizations increasingly rely on pre-trained models from others, this is often outside of their control. And although better training data may help with bias, it won’t eliminate AI hallucinations. Depending on the criticality of the application, companies must establish guardrails by maintaining decision-making control, adding guidelines that are applied before the output is used, or ensuring there is always a human involved in any process involving AI technologies. Looking ahead, I’d expect to see a larger tactical focus on establishing these types of AI controls. Large models built on massive amounts of data will get better, but they will never be completely hallucination-free.
This order is pushing the industry to focus on better AI integrations and on tools supporting and allowing auditability of those integrations. With data science being democratized and more eyes than ever on AI regulation, now is the time for organizations to put systems in place to ensure responsible AI use.” – Michael Berthold, CEO of KNIME
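Berthold’s guardrails lend themselves to a simple sketch: gate automated release on confidence, and route everything else to a person. The model call, confidence source, and threshold below are all assumptions for illustration, not KNIME’s design.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # tune per application criticality

@dataclass
class ModelOutput:
    text: str
    confidence: float  # assumed to be reported by the serving layer

def call_model(prompt: str) -> ModelOutput:
    """Stub standing in for a real LLM call."""
    return ModelOutput(text=f"Draft answer to: {prompt}", confidence=0.72)

def answer(prompt: str, human_review_queue: list[ModelOutput]) -> str | None:
    """Release output automatically only when confidence clears the bar."""
    output = call_model(prompt)
    if output.confidence >= CONFIDENCE_THRESHOLD:
        return output.text             # safe to release automatically
    human_review_queue.append(output)  # low confidence: a person decides
    return None

queue: list[ModelOutput] = []
print(answer("Summarize this contract clause.", queue))  # None -> routed to review
print(len(queue))                                        # 1 item awaiting a human
```

The design choice is the key point: the human-in-the-loop path is the default, and fully automated release is the exception that must be earned.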
“The White House’s call for ‘Safe, Secure, and Trustworthy Artificial Intelligence’ underscores the growing global apprehension regarding AI’s impact on both individuals and organizations. As regulations continue to be developed across the world, the current executive order from the Biden administration will have a major impact on how both government and private organizations create internal guidelines for leveraging AI technologies.
AI has transformed the data landscape; however, careful examination of data sharing practices and recipients is much needed. A responsible, data-centric approach should underpin every AI implementation, and the standards being set for government agencies should be upheld across private and public organizations as well. It is crucial to acknowledge that once AI systems gain access to data, that information becomes an integral part of the system permanently; we cannot afford to trust AI technology without the appropriate controls and clearly assigned responsibilities.” – Rehan Jalil, President & CEO, Securiti
“President Biden’s executive order on AI is a timely and critical step, as AI-enabled fraud becomes increasingly harder to detect. This poses a serious threat to individuals and organizations alike, as fraudsters can use AI to deceive people into revealing sensitive information or taking actions that could harm them financially or otherwise.
In light of this growing threat, organizations must elevate the protection of their users. This can be accomplished by developing and implementing standards and best practices for detecting AI-generated content and authenticating official content and user identities – through tactics such as deploying biometrics-based authentication methods, including fingerprint or facial recognition, and conducting continuous content authenticity checks.
Organizations must act now to protect their users from increasingly sophisticated AI-enabled fraud and deception methods. Enhancing identity verification tactics is essential to mitigate this risk.” – Stuart Wells, CTO of Jumio
“Governments and regulatory bodies have, until now, paid little attention to the notion of “watermarking” AI-generated content. It is a massive technical undertaking, as AI-generated text and multimedia content are increasingly indistinguishable from human-generated content. There can be two approaches to “watermarking” – either build reliable detection mechanisms for AI-generated content, or force watermarking at generation time so that AI-generated content is easily recognized.
Publishing and indexing machine-generated content has been a concern for a good 10 years now (for instance, Google would not index machine-generated content), and the concerns are now increasing since AI content is often indistinguishable from human-generated content.” – Olga Beregovaya, VP of AI and Machine Translation at Smartling
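The second approach Beregovaya describes – forcing a watermark at generation time – can be made concrete with a heavily simplified version of the “green list” scheme proposed by Kirchenbauer et al. (2023): the generator biases sampling toward a pseudo-random half of the vocabulary seeded by the previous token, and the detector counts how often tokens land in that half. The sketch below is illustrative, not a production detector.

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign ~half the vocabulary to the 'green' list,
    keyed on the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens that fall in the green list given their predecessor.
    Human text scores near 0.5; watermarked text scores well above it."""
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# A detector would flag text whose green fraction is far above 0.5,
# e.g. via a one-sided z-test over the token count.
sample = "the quick brown fox jumps over the lazy dog".split()
print(f"green fraction: {green_fraction(sample):.2f}")
```

This also illustrates Beregovaya’s caveat: the watermark survives only as long as the text is not heavily paraphrased, which is why detection remains a hard problem.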
“The White House executive order is the U.S. government’s first-ever to require new safety assessments on AI. Even more than the cloud, the iPhone, the internet, and the computer, both government and industry anticipate AI will be a transformative technology, on par with breakthroughs like electricity and the discovery of fire. What makes AI so revolutionary is its potential for exponential growth, which some compare to Moore’s Law. This upward trajectory has major impacts on the workforce and the future of work as we know it.
As the Executive Order indicates, it is the responsibility of both the public and private sectors to manage AI and its growth potential, as well as its risk factors. For the workforce and people on the frontlines of this technology, we must prioritize safe, secure and trusted applications of artificial intelligence technologies.
AI has the potential to expand employees’ capabilities by empowering them with data insights, automation, and heightened productivity, and financial institutions and economists anticipate these benefits could translate to trillion-dollar increases in global GDP. In pursuit of this growth, workers in both the private and public sector must be empowered to successfully adopt AI technologies and deploy them appropriately to solve key challenges. Companies and institutions must balance the economic and productive benefits of AI, while maintaining data privacy and cybersecurity best practices. Additionally, technology leaders must take steps to mitigate potential AI bias and hallucinations, while empowering people to use AI safely and responsibly. President Biden’s executive order further establishes the need for enterprises to properly implement these necessary guardrails, and we look forward to championing the responsible growth of the AI category.” – Billy Biggs, VP of Public Sector at WalkMe
“There has never been faster adoption of any technology than what we’ve seen with Generative AI, ML, and LLMs over the past year. A prime example of such rapid adoption and disruption is the public letter from Satya Nadella, CEO of Microsoft, announcing that all Microsoft products are or soon will be Copilot-enabled – and this is just the starting point.
“The most recent AI Executive Order demonstrates that the Biden administration wants to get ahead of this very disruptive technology in its public sector use, and to protect the private sector by requiring all major technology players with widespread AI implementations to perform adversarial ML testing. The order also directs NIST to define AI testing requirements, which is critical because no one can yet say with confidence that we, as a tech industry, exhaustively know all the ways these new AI implementations can be abused.” – Tim MalcomVetter, Executive Vice President, Strategy, NetSPI
“President Biden’s executive order on AI is certainly a step in the right direction and the most comprehensive to date; however, it’s unclear how much impact it will have on the data security landscape. AI-led security threats pose a very complex problem and the best way to approach the situation is not yet clear. The order attempts to address some of the challenges but may end up not being effective or quickly becoming outdated. For instance, AI developers Google and OpenAI have agreed to use watermarks but nobody knows how this is going to be done yet, so we don’t know how easy it’s going to be to bypass/remove the watermark. That said, it is still progress and I’m glad to see that.” – Platform.sh’s VP, Data Privacy & Compliance, Joey Stanford
“As one of the industry’s earliest innovators in generative AI, we applaud President Biden’s executive order on safe, secure, and trustworthy AI.
Corporate control over models and the data they are trained on is critical. As the White House’s announcement called out, “AI not only makes it easier to extract, identify, and exploit data – it also heightens incentives to do so because companies use data to train AI systems.” Given this, protecting our privacy with AI is incredibly important. And it’s not just about Americans’ privacy; it’s also about intellectual property and copyright held by business entities. Big Tech has been completely unconstrained in its competitive practices for the last 25 years, and unsurprisingly its monopolistic tendencies are now playing out across AI. Case in point: there are currently pending lawsuits against the companies behind the large-scale models for copyright infringement, and directly against Microsoft, in particular, for training its code-generation models on code sourced from private code repositories without the permission of the code creators. Data used in models must be explicitly allowed and fully transparent – an ongoing and persistent problem for AI that urgently needs to be dealt with.
As the oldest standing company specifically focused on generative AI to support the software development lifecycle (SDLC), we believe that developing “principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection” is critical. In fact, the adoption of AI by software developers is a great example of where AI is proving to be beneficial to workers. Is AI replacing technology professionals through automation (as many feared would happen)? No. What we’re actually seeing as people begin to adopt AI tools in the SDLC is an upleveling of their work – an overall increase in productivity and real-time learning and skills development. This is huge. Software roles have a double-digit negative unemployment rate today, and there is tremendous pressure on salaries driving up costs. The number of potential workers for available jobs was in decline well before the pandemic and, while there may be fewer workers per available job, we are not making less work. We’re facing a very real macroeconomic crisis where we need to find more worker productivity. With AI, what we’re seeing within at least this one role and category is an increase in productivity that benefits employers, combined with the elimination of repetitive and frustrating tasks, which increases employee satisfaction and the value of their work.
We also applaud the White House for promoting a “fair, open, and competitive AI ecosystem” by providing both small developers and entrepreneurs access to technical assistance and resources, helping small businesses commercialize AI breakthroughs, and encouraging the Federal Trade Commission (FTC) to exercise its authorities. As we know, Big Tech is proving they want to expand their aggressively competitive practices to capture the entire AI stack. Companies like Microsoft invested heavily in OpenAI because they wanted to control not just the algorithms and building blocks of AI, but also deploy them down to every single product category and potential application. These behemoths want to control a fully integrated stack that ensures there is no meaningful competition. It appears the White House is also seeing this – and creating opportunities for small businesses — the lifeblood of the American economy — to double down on innovation.” – Peter Guagenti, President of Tabnine
“While there should be regulation that protects people from the dangerous effects of “bad AI” – deep fakes, for example – the prevalence of these types of malicious AI use cases is vastly overblown. Most AI can be defined as “good AI,” or AI that enhances human productivity, and it’s not as scary or all-encompassing as people fear. As we’ve seen time and time again, it cannot be relied upon to produce consistently accurate results without ever-present human oversight. Think of generative AI like an editor or a copywriter; it’s a tool that makes you faster and better at your job.
Government regulation has proven to slow down innovation, and I worry about forms of regulation where there are too many restrictions that stop good people from collaborating quickly and freely.
Generative AI is impressive and amazing, but it won’t be the magic pill for everything you need. Just as with the invention of the first computer or the internet, AI will make us more efficient and better at our jobs and create new startup growth.” – Dan O’Connell, Chief AI and Strategy Officer at Dialpad
“In the wake of the US’ AI executive order, we’re witnessing a seismic shift in the AI landscape. It’s not just about adhering to regulations; it’s about embracing observability as the linchpin for responsible AI. This executive order is a call to action, a call to prioritize transparency, accountability, and ethics in AI. Observability isn’t just a compliance requirement; it’s the new standard, a pathway to enhancing customer trust and driving innovation. It’s a pivotal moment for the AI industry, and I have hope that this executive order will set the stage for a more responsible and innovative future.” – Liran Hason, co-founder & CEO at Aporia
“President Biden’s swift Executive Order on AI will steer the industry in a positive direction, while placing the United States at the forefront of AI innovation and responsible governance. This move, coupled with the establishment of a comprehensive framework for responsible AI development and deployment, is essential in fostering greater trust in the technology across all industries. It is imperative to strike a balance between innovation and regulation to guard against misuse and risks, as seen throughout history when the US government has regulated powerful technologies like nuclear fission, genetic engineering and even seat belt requirements for automobiles. Companies must embrace this balance by developing and implementing stringent guardrails to uphold ethical AI use, preventing any unintended consequences.” – Gopi Polavarapu, Senior Vice President at Kore.ai
“Until now, the United States has treated guidance on the use of AI as exactly that – recommended guidance – and even the EO doesn’t amount to enforced regulation. This laissez-faire approach doesn’t compare to what the rest of the world is already doing; the U.S. should model the participation of different stakeholders in the regulatory process and incentivize safe innovation.
While other regions such as the EU arrived at this point some time ago, the UK is now tackling forward-looking, more daunting topics like frontier AI and AI existential risk, as seen at this week’s AI Summit. However, we need to first address existing issues such as AI bias or discrimination before we even think to solve the others. Coming out of the Summit, we need to walk the fine line between future and existing issues if we hope to truly shape AI governance.” – Sumsub’s AI Policy and Compliance Specialist Natalia Fritzen
“Governments and organizations around the globe have begun to realize the pressing need for unified processes, measurement metrics, systems, tools, and methods to regulate the development and monitoring of AI systems for the larger good of the world. Meanwhile, AI and analytics organizations developing and distributing AI solutions to client organizations across verticals have long followed methods and processes to offer responsible, ethical, and explainable AI systems and solutions that carefully balance innovation and impact.
The Biden administration’s wide-ranging executive order is a move to streamline the development and dissemination of AI systems, including but not limited to healthcare, human services, and dual-use foundation models. The executive order balances optimism about the potential of AI with considerations of the risk, privacy, and safety implications of using such systems unmonitored. It stresses the need for existing agencies and bodies to come together and provides a directive for these organizations to formulate cohesive tools to better understand AI systems and create oversight.” – Genpact Global AI/ML Services Leader Sreekanth Menon
“I think it is important to look at this and appreciate that it is a bit late to the game. Most people are uncertain when we would even have enough data to train a model like GPT-5. It is important that we understand the data sets and make sure we don’t create propaganda-spewing bots, but at the same time people are already going to claim one model or another isn’t right and make it political. You can create hate speech and all kinds of things now with these models, and these laws won’t change that. You have dark-web-based models for hackers. These laws sound great in principle, but at the end of the day they aren’t going to matter, because the genie is already out of the bottle and there is no way to put it back in. Companies are going to start looking at smaller models that are more cost-effective and specifically trained for their use case, rather than worry about huge models whose cost they can’t afford. This order doesn’t stop any of that, and that is the direction people will be headed.” – Dr. Ryan Ries, Data & ML Practice Lead at Mission Cloud