
EU’s powerful AI Act is here. But is it too late?

Updated: Tuesday, December 12, 2023


The framework prohibits mass, untargeted scraping of face images from the internet or CCTV footage to create a biometric database. Image: DepositPhotos

European Union officials made tech policy history last week by enduring 36 hours of grueling debate to finally settle on a first-of-its-kind, comprehensive AI safety and transparency framework called the AI Act. Supporters of the legislation and AI safety experts told PopSci they believe the new guidelines are the strongest of their kind worldwide and could set an example for other nations to follow.

The legally binding framework sets crucial new transparency requirements for OpenAI and other generative AI developers. It also draws several red lines banning some of the most controversial uses of AI, from real-time facial recognition scanning and so-called emotion recognition to predictive policing techniques. But there could be a problem brewing under the surface. Even once the Act is voted through, Europe’s AI cops won’t actually be able to enforce any of those rules until 2025 at the earliest. By then, it’s anyone’s guess what the ever-evolving AI landscape will look like.

What is the EU AI Act? 

The EU’s AI Act breaks AI tools and applications into four distinct “risk categories,” with those placed on the highest end of the spectrum exposed to the most intense regulatory scrutiny. AI systems considered high risk, which would include self-driving vehicles, tools managing critical infrastructure, medical devices, and biometric identification systems, among others, would be required to undergo fundamental rights impact assessments, adhere to strict new transparency requirements, and be registered in a public EU database. The companies responsible for these systems will also be subject to monitoring and record-keeping practices to assure EU regulators that the tools in question don’t pose a threat to safety or fundamental human rights.
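For readers who want the tiered structure in concrete terms, it amounts to a lookup from risk tier to obligations. Here is a minimal, illustrative Python sketch; the tier names and the duties attached to each are assumptions drawn from the reporting above, since the Act’s final text has not been published.

```python
# Illustrative sketch only: tier names and obligations are assumptions
# based on public reporting, not the AI Act's final (unpublished) text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # e.g., medical devices, biometric ID
    LIMITED = "limited"            # lighter transparency duties
    MINIMAL = "minimal"            # largely untouched by the Act

# Duties reported for each tier; high-risk systems carry the heaviest load.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "fundamental rights impact assessment",
        "strict transparency requirements",
        "registration in a public EU database",
        "ongoing monitoring and record keeping",
    ],
    RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the reported obligations for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for duty in obligations_for(RiskTier.HIGH):
        print(duty)
```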

It’s important to note here that the EU still needs to vote on the Act, and a final version of the text has not been made public. A final vote on the legislation is expected to occur in early 2024.

“A huge amount of whether this law has teeth and whether it can prevent harm is going to depend on those seemingly much more technical and less interesting parts.”

The AI Act goes a step further and bans other use cases outright. In particular, the framework prohibits mass, untargeted scraping of face images from the internet or CCTV footage to create a biometric database. This could potentially impact well-known facial recognition startups like Clearview AI and PimEyes, which reportedly scrape the public internet for billions of face scans. Jack Mulcaire, Clearview AI’s General Counsel, told PopSci the company does not operate in or offer its products in the EU. PimEyes did not immediately respond to our request for comment.

Emotion recognition, which controversially attempts to use biometric scans to detect an individual’s feelings or state of mind, will be banned in the workplace and schools. Other AI systems that “manipulate human behavior to circumvent their free will” are similarly prohibited. AI-based “social scoring” systems, like those notoriously deployed in mainland China, also fall under the banned category.

Tech companies found sidestepping these rules or pressing on with banned applications could see fines ranging between 1.5% and 7% of their total revenue, depending on the violation and the company’s size. This penalty system is what gives the EU AI Act teeth and what fundamentally separates it from the voluntary transparency and ethics commitments recently secured by the Biden administration in the US. Biden’s White House also recently signed a first-of-its-kind AI executive order laying out his vision for future US AI regulation.
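As rough arithmetic, a turnover-based penalty is simply revenue multiplied by a rate somewhere in that reported band. Which violations map to which rates has not been finalized, so this sketch treats the rate as a free parameter rather than a schedule from the Act.

```python
# Back-of-the-envelope illustration only: the 1.5%-7% band comes from
# the reporting above; the final text's mapping of violations to rates
# is not yet public, so the rate here is caller-supplied.
def estimated_fine(annual_revenue_eur: float, rate: float) -> float:
    """Return a turnover-based fine for a rate within the reported band."""
    if not 0.015 <= rate <= 0.07:
        raise ValueError("rate outside the reported 1.5%-7% band")
    return annual_revenue_eur * rate

# A firm with EUR 10 billion in annual revenue facing the top rate:
print(f"EUR {estimated_fine(10e9, 0.07):,.0f}")  # EUR 700,000,000
```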

In the immediate future, large US tech firms like OpenAI and Google that operate “general purpose AI systems” will be required to keep EU officials up to date on how they train their models, report summaries of the types of data they use to train those models, and adopt a policy agreeing to adhere to EU copyright laws. General models deemed to pose a “systemic risk,” a label Bloomberg estimates currently only includes OpenAI’s GPT, will be subject to a stricter set of rules. Those could include requirements forcing the model’s maker to report the tool’s energy use and cybersecurity compliance, as well as calls for them to perform red-teaming exercises to identify and potentially mitigate signs of systemic risk.

Generative AI models capable of creating potentially misleading “deepfake” media will be required to clearly label those creations as AI-generated. Other US AI companies that create tools falling under the AI Act’s “unacceptable” risk category would likely no longer be able to continue operating in the EU once the legislation officially takes effect.

[ Related: “The White House’s plan to deal with AI is as you’d expect” ]

AI Now Institute Executive Director Amba Kak spoke positively of the AI Act, telling PopSci it was a “crucial counterpoint in a year that has otherwise largely been a deluge of weak voluntary proposals.” Kak said the red lines barring particularly threatening uses of AI and the new transparency and diligence requirements were a welcome “step in the right direction.”

Though supporters of the EU’s risk-based approach say it helpfully avoids subjecting more mundane AI use cases to overbearing regulation, some European privacy experts worry the structure places too little emphasis on fundamental human rights and departs from the approach of past EU legislation like the 2018 General Data Protection Regulation (GDPR) and the Charter of Fundamental Rights of the European Union (CFREU).

“The risk-based approach is in tension with the rest of the EU human rights frameworks,” European Digital Rights Senior Policy Advisor Ella Jakubowska told PopSci during a phone interview. “The entire framework that was on the table from the beginning was flawed.”

The AI Act’s risk-based approach, Jakubowska warned, may not always provide a full, clear picture of how certain seemingly low-risk AI tools could be used in the future. Jakubowska said rights advocates like herself would prefer mandatory risk assessments for all developers of AI systems.

“Overall it’s very disappointing,” she added. 

Daniel Leufer, a Senior Policy Analyst for the digital rights organization AccessNow, echoed those concerns regarding the risk-based approach, which he argues was designed partly as a concession to tech industry groups and law enforcement. Leufer says AccessNow and other digital rights organizations had to push EU member states to agree to include the “unacceptable” risk category, which some initially refused to acknowledge. Kak, the AI Now Institute Executive Director, went on to say the AI Act could have done more to clarify regulations around AI applications in law enforcement and national security domains.

An uncertain road ahead 

The framework agreed upon last week was the culmination of years’ worth of back-and-forth debate between EU member states, tech firms, and civil society organizations. First drafts of the AI Act date back to 2021, months before OpenAI’s ChatGPT and DALL-E generative AI tools enraptured the minds of millions. The skeleton of the legislation reportedly dates back even further still, to as early as 2018.

Much has changed since then. Even the most prescient AI experts would have struggled to imagine hundreds of top technologists and business leaders frantically adding their names to impassioned letters urging a moratorium on AI development to supposedly safeguard humanity. Few could have predicted the current wave of copyright lawsuits lodged against generative AI makers questioning the legality of their massive data-scraping techniques, or the torrent of AI-generated clickbait filling the web.

Similarly, it’s impossible to predict what the AI landscape will look like in 2025, which is the earliest the EU could actually enforce its hefty new regulations. Axios notes EU officials will urge companies to agree to the rules in the meantime, but only on a voluntary basis.
