The White House today announced that it has accepted pledges from a number of high-profile tech companies for the safe development of AI. The Fact Sheet for today’s meeting can be found here. Seven companies — Google, Microsoft, Meta, Amazon, OpenAI, Anthropic, and Inflection — convened at the White House to announce the voluntary agreements.
Here are several commentaries we received from our friends in the big data ecosystem:
Commentary by Anita Schjøll Abildgaard, CEO and co-founder of Iris.ai
“With all emerging technology, the establishment of clear legal frameworks is necessary to ensure the technology is used safely and fairly. The requirement that developers publish the authors of material used in chatbot training is an important measure for making sure that authors are credited, and the protections against overt surveillance have obvious benefits.
More regulation is coming, and the development of AI models has a role to play. Transparency into how the models work will be crucial in promoting trust and accountability, while making sure regulation is being adhered to. Another key aspect is explainability. AI systems that provide understandable explanations for their decisions will not only enhance transparency but also help to combat the biases in some models and prevent discriminatory practices from taking place.
It is important to recognize that AI governance is a complex and evolving field. The genie is already out of the bottle, and while regulators catch up with this hugely powerful technology, organizations developing AI can help to make sure its potential is harnessed for the benefit of everyone.”
Commentary by Aaron Mendes, CEO of PrivacyHawk
“It’s nice to see big tech pledging to be responsible with AI. This move by the White House primarily helps with misinformation. Now we need more commitments to help protect consumers from the dangers of AI, particularly how their privacy can be violated, and personal data can be used for scams, fraud, and other cybercrimes. Even if they do some work on protecting consumers, it’s still important for individuals to reduce their digital footprint before it’s too late. Once malicious AI models have gobbled up all of our publicly available personal data, it’s too late to take it back.”
Commentary by Dan Schiappa, CPO of Arctic Wolf
“It is positive to see the top leaders in AI working closely to align on standards and ethical commitments at this point in the AI boom. AI development is happening at warp speed, and any amount of delay from regulators could be detrimental from a cybersecurity perspective. With the heavy emphasis on security testing and risk management in these standards, it’s safe to say that there is greenfield opportunity for cybersecurity leaders to follow suit and work hand in hand with AI leaders to accelerate innovation safely and faster than potential threats.
It is important to note that the cybersecurity industry can play an integral role in identifying efforts to circumvent these standards, and I hope this will lead to a great partnership between the AI and cybersecurity industries. While we know it’s illegal to conduct cybersecurity attacks, they still continue to happen and affect organizations of all industries and sizes. So, while standards keep well-intentioned organizations in control, they will not prohibit others from using this technology without us. Nevertheless, I believe this is a step in the right direction.”
Commentary by Jacob Beswick, Director of AI Governance Solutions at Dataiku
“We’re encouraged to see the steps the Biden Administration has taken on AI safety, security, and trust. While these voluntary commitments are a first step, there is certainly more to be considered and acted on, including how the identified risks will propagate across value chains and not just with the developers of these technologies. Setting expectations and proposing relevant actions across value chains, from the organizations developing models, to the organizations and individuals operationalizing them, to those consuming their outputs, is critical to ensuring the three principles are meaningfully achieved.”
Commentary by Shane Orlick, President at Jasper
“AI will affect all aspects of life and society, and with any technology this comprehensive, the government must play a role in protecting us from unintended consequences and establishing a single source of truth surrounding the important questions these new innovations create, including what the parameters around safe AI actually are. The Administration’s recent actions are promising, but it’s essential to deepen engagement between government and innovators over the long term to put and keep ethics at the center of AI, deepen and sustain trust over the inevitable speed bumps, and ultimately ensure AI is a force for good. That also includes ensuring that regulations aren’t stifling competition and creating a new generation of tech monopolies, and instead invite all of the AI community to responsibly take part in this societal transformation.”
Commentary by Ivan Ostojic, Chief Business Officer of Infobip
“Private companies should experiment with caution. Regulators are still trying to wrap their heads around the technology, and the pace of regulation is slow even as the technology changes every day. So a precedent must be set for the industry to move forward with consideration and caution. Sam Altman’s testimony before Congress highlighted the need for responsible regulation and the potential impact of AI on jobs and democracy: guidelines for ethical use, control mechanisms for certain contexts, reskilling policies, and regulations related to user data and democracy.”
Commentary by Nitzan Shaer, co-Founder and CEO of WEVO
“The promise of AI holds deep ramifications for the consumer internet and the experience we have every day in an increasingly AI-driven world. While these developments recognize the potential of AI, the commitments also highlight the importance of reliable and responsible user testing in order to address bias and discrimination. While AI’s transformative impact on our daily lives is all but inevitable, we must embrace responsible use of the technology by prioritizing safeguards and guardrails to maximize its potential while mitigating risks. For instance, Alphabet’s new tool to help journalists is validation for widely accepted forecasts that AI could generate the majority of the internet’s content within a decade. Given this explosion, it is crucial we remain vigilant about vetting information produced by AI for accuracy and relevancy and always put the end user first, prioritizing their experience while minimizing bias. That is why at WEVO, we combine AI-driven insights with human intelligence to provide unbiased feedback that cuts through the noise and resonates with your target audiences, while avoiding the negative consequences of letting untested AI determine user experience.”
Commentary by Bhavin Shah, CEO & co-founder at Moveworks
“Almost everyone agrees that there needs to be some form of regulation in AI – it’s synonymous with safety. The challenge is that there is endless nuance when it comes to AI. Current attempts at regulating the space lack the specificity needed to provide impactful guardrails while still leaving room for true AI innovation. Given the complexities of AI technology, there isn’t one clear path to enforcing these regulations; however, it’s important that our representatives are educated enough on the space to ask the right questions. Is the main goal to protect consumers? Are we trying to regulate the technology itself, like large language models or transformer models? Or are we trying to protect data? Without this level of detail, innovation, investing, and even hiring will be stifled. Thus far, the White House has issued blanket regulations that lack the level of comprehension needed to keep up with the current pace of innovation.”
Sign up for the free insideAI News newsletter.
Join us on Twitter: https://twitter.com/InsideBigData1
Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/
Join us on Facebook: https://www.facebook.com/insideBIGDATANOW