Last week, the EU’s AI Act went into effect, following final approval from EU member states, lawmakers, and the European Commission.
The AI Act aims to govern how companies develop, use, and apply AI through a risk-based approach to regulation: each AI application will be monitored and categorized based on the “risk it poses to human society.” Applications that could threaten human privacy or safety must follow strict guidelines and adhere to EU AI monitoring policies.
These guidelines include adequate risk assessment, high-quality training data sets, bias mitigation, logging of AI activity, and mandatory sharing of documentation with authorities, among other requirements.
“Historically, there has been extensive debate between the goal of making money, with innovation as its proxy, and responsibility to society,” commented Iddo Kadim, field CTO at NeuReality. “The intent of the AI Act is to encourage and reward responsible AI innovation. As with any regulation, the actual interpretation and implementation will determine how successful it is in achieving that goal; as of now, that remains to be seen. Ultimately, sustainable and cost-efficient AI solutions should be the goal of organizations across the globe. For society to trust AI, it must be safe, secure, and sustainable. How could societies really trust an AI that makes the planet less habitable and society more dangerous? We must create an environment in which companies with no regard for people or planet find themselves struggling more than the rest. I am happy to see a good effort on protecting people’s privacy when working with AI, but more must be done for the planet in terms of AI development.
“Overall, the companies most affected by these regulatory measures are the ones that build AI applications. The more risk associated with an application, the more scrutiny its maker will face to ensure it meets regulatory requirements, or risk devastating consequences, both financial and reputational. Companies that build infrastructure for AI development and deployment can help application builders by implementing and enforcing privacy and security controls and by helping minimize energy consumption.
“The AI Act might lead to a few possible outcomes:
- Companies that operate outside the EU could delay entering it, or avoid it altogether.
- Some companies may release their products with region-specific feature sets.
- Companies that choose to meet the requirements might incur additional costs at first, especially when implementing new systems and measures that were not previously in place, but those incremental costs are likely to become less significant over time as they become part of standard operations.
- The act could deter the worst actors from “adding” to the most dangerous or unintended consequences of AI, such as AI-driven cyberattacks and ransomware.
“If GDPR is a reference for this regulation, then ultimately a large majority of companies will learn what it takes to adhere to the new rules and simply integrate them as a standard part of how they do business.”
As an infrastructure company, NeuReality offers a distinctive AI inference and serving solution designed to boost the operational efficiency of AI applications. Drawing on insights from real-world AI deployments across the globe, NeuReality has built robust capabilities into its platform, uniquely positioning the company to help organizations safeguard user data effectively.