In this contributed article, Aayam Bansal explores the increasing reliance on AI in surveillance systems and the profound societal implications that could lead us toward a surveillance state. This piece delves into the ethical risks of AI-powered tools like predictive policing, facial recognition, and social credit systems, while raising the question: Are we willing to trade our personal liberties for the promise of safety?
Industry Leaders Call for House of Representatives to Draw Better Distinction Between AI Players Across Legislative Frameworks
A group of AI integration industry leaders submitted a joint letter to the US House of Representatives Artificial Intelligence Task Force, calling on it to redefine roles and responsibilities for AI actors across the value chain. The signatories include Alteryx, Salesforce, Twilio, Box, Kyndryl, and Peraton.
RAND AI Governance Series: How U.S. policymakers can learn from the EU AI Act
RAND kicked off a new series of short reports designed to provide U.S. policymakers with potential lessons from the EU AI Act on key facets of AI governance. The series, written by researchers on both continents, highlights the need for deepening collaboration between the EU and the U.S., as any regulatory progress in these regions will have far-reaching effects on the broader societal, legal, and ethical consequences of AI adoption globally.
How AI Enhances Government Payment Processes – Survey Reveals Critical Inefficiencies
In this contributed article, Niko Spyridonos, CEO and Founder of Autoagent Data Solutions, discusses local governments and their use of AI and automation technologies. A strong proponent of such systems, Niko presents government survey data revealing critical payment-processing inefficiencies and argues that we need to go deeper in this area.
Trend Micro Strengthens AI Deployments for Enterprises and Governments with NVIDIA AI Enterprise
Trend Micro Incorporated (TYO: 4704; TSE: 4704) has launched multiple efforts to shape the future of AI implementation by enterprises and governments. The new solution, included in Trend Micro’s Vision One™ Sovereign Private Cloud — powered by NVIDIA NIM microservices, part of the NVIDIA AI Enterprise software platform — will allow organizations to maximize the potential of the AI era while maintaining business resilience.
HPC/AI User Forum Event September 4-5 at Argonne National Laboratory, Lemont, Illinois Features Real-World Uses of AI in Industry, Science & Energy
The HPC User Forum, established in 1999 to promote the health of the global HPC industry and address issues of common concern to users, has opened registration for its upcoming meeting, September 4-5, 2024, at Argonne National Laboratory in Lemont, Illinois.
The Tech Tracking 2024 Presidential Ad Campaigns
A new searchable database allows the public to examine groups running social media ads that mention U.S. presidential candidates, including secretly coordinated pages that are running identical videos or messages. The project is supported by a grant and use of analytics software from Neo4j, a leading graph database and analytics company.
Big Tech is Likely to Set AI Policy in the U.S. We Can’t Let That Happen
In this contributed article, Dr. Anna Becker, CEO and cofounder of Endotech.io, explains why President Biden’s recent executive order and approach to regulating Artificial Intelligence puts American innovation at risk, likely favoring the views and interests of large, established tech companies over those of startups.
Generative AI Highlights the Need for Identity Verification
In this contributed article, Mark Lieberwitz, Co-Founder & CPO of KarmaCheck, takes a look at how verified identities help restore trust in digital content, combat the erosion of trust caused by generative AI, and prevent the dissemination of false narratives. Implementing identity verification must be accompanied by robust privacy measures and industry-wide collaboration to protect user data and establish consistent standards.
ChatGPT, Crime & the Impact on Law Enforcement
Our friends over at Cognyte recently released a report about ChatGPT, crime, and the impact on law enforcement authorities: “ChatGPT and Crime – What Law Enforcement Needs to Know about Large Language Models.” Hundreds of millions of users around the world are using AI bots, such as ChatGPT, which are powered by Large Language Models (LLMs). This rapidly evolving technology has the potential to allow criminals and bad actors to easily scale up cybercrime, financial crime, human trafficking, disinformation, and other illicit activities.