Sean Radel

Main characteristics and elements of the E.U.'s Artificial Intelligence Act (AIA)

The E.U.’s Artificial Intelligence Act (AIA) is a legal framework, likely to be implemented in early 2024, that governs the sale and use of AI. The AIA is similar to GDPR, the Digital Services Act, and the Digital Markets Act in that it regulates the digital economy. All AI systems that are “placed on the market, put into service or used in the EU” are subject to the regulation, with three exceptions: AI systems for military and national security purposes, free and open-source AI systems, and systems built for scientific research (Hoffman, 2023).

The regulation sorts systems into four risk categories: unacceptable, high, low, and minimal risk (Wörsdörfer, 2023), and it prohibits any system that poses an unacceptable risk. The unacceptable-risk category targets systems deemed manipulative, exploitative, or aimed at social control. Systems are considered high-risk if they are already subject to a separate safety regulation (e.g., toys or medical devices) or if they fall into the following use cases: biometrics; critical infrastructure; education and vocational training; employment, worker management, and access to self-employment; access to essential services; law enforcement; migration, asylum, and border control management; and the administration of justice and democratic processes (Hoffman, 2023). High-risk systems must pass a conformity assessment to show they meet all AIA standards, and their providers must register them in a public E.U. database of high-risk AI services (Wörsdörfer, 2023).

Low- and minimal-risk systems instead carry a transparency obligation: users must be notified that they are interacting with artificial intelligence whenever the system detects emotions, determines associations with social categories based on biometric data, or generates or manipulates image, audio, or video content (Wörsdörfer, 2023).
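To make the tiered structure concrete, here is a minimal Python sketch of the taxonomy described above. The tier names, obligations, and high-risk use cases come from the summary; the enum, the set, and the `classify` function are purely my own illustration and appear nowhere in the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The AIA's four risk tiers, per Hoffman (2023) and Wörsdörfer (2023)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment + registration in the public E.U. database"
    LOW = "transparency obligation (users must be told they face an AI system)"
    MINIMAL = "no obligations beyond transparency where applicable"

# High-risk use cases listed above (Annex III-style areas).
HIGH_RISK_AREAS = {
    "biometrics",
    "critical infrastructure",
    "education and vocational training",
    "employment, worker management, and access to self-employment",
    "access to essential services",
    "law enforcement",
    "migration, asylum, and border control management",
    "administration of justice and democratic processes",
}

def classify(use_case: str, manipulative_or_social_control: bool = False) -> RiskTier:
    """Toy classifier: banned practices first, then high-risk areas,
    then a default tier. Real classification is far more nuanced."""
    if manipulative_or_social_control:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

print(classify("law enforcement"))  # RiskTier.HIGH
print(classify("spam filter"))      # RiskTier.MINIMAL
print(classify("ad targeting", manipulative_or_social_control=True))  # RiskTier.UNACCEPTABLE
```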

AIA's strengths and weaknesses from a computer ethics perspective

From a computer ethics perspective, the AIA has good intentions but still has shortcomings. It protects users from potentially harmful technologies and increases the visibility of the high-risk systems they are opting into. The AIA’s position on unacceptable risk is good because it protects users from corporations using the technology for malicious purposes, but it is contradictory because certain usages are exempt. The regulation does not prohibit militaries or national security services from using AI for malicious purposes, so the ban may offer little protection in practice. Even the exemption for scientific research could be dangerous, and it may require something akin to an institutional review board (IRB) to judge whether a given research system is truly ethical.

Possible reform measures that could help to strengthen the AIA

The first aspect of the AIA that needs reform is that developers determine their own risk category. If a developer labels its risk category incorrectly, it is subject to a fine of 20 million euros or 4% of its global turnover. I believe this is a very high fine given how high-level and abstract the current risk categories are: it will be challenging for developers to interpret the regulation in its early days, and that could leave companies vulnerable. A possible change is to fine firms based on the effect of the mislabeling and to create a structured process for evaluating risk levels, so that the fine scales with the severity of the violation, as the sketch below illustrates.

The risk system itself is the second aspect of the AIA that I think should face reform. With the rapid development of AI, developers may try to work around the high-risk labeling system and build systems that are functionally high-risk but legally not.

Finally, something that is still unclear to me about the interaction between the AIA and GDPR: if an AI model is trained on data that violates GDPR, what happens to the model? We are entering foggy legal territory, but personally I think AI trained on GDPR-violating data should be classified as an unacceptable risk and banned.
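As a back-of-the-envelope illustration of the severity-scaled fine proposed above, here is a short Python sketch. The 20-million-euro and 4%-of-turnover figures come from the discussion above; reading the penalty as the greater of the two amounts, and the severity factor itself, are my own assumptions for illustration.

```python
def current_fine(global_turnover_eur: float) -> float:
    """Flat penalty as described above: 20M EUR or 4% of global
    turnover (assumed here to mean whichever is greater)."""
    return max(20_000_000.0, 0.04 * global_turnover_eur)

def proposed_fine(global_turnover_eur: float, severity: float) -> float:
    """Reform sketch: scale the statutory maximum by a severity factor
    in [0, 1], e.g. derived from the measured harm of the mislabeling."""
    if not 0.0 <= severity <= 1.0:
        raise ValueError("severity must be between 0 and 1")
    return severity * current_fine(global_turnover_eur)

# A firm with 100M EUR global turnover: the flat regime charges the
# 20M EUR floor regardless of harm, while a low-harm mislabeling
# (severity 0.1) would owe 2M EUR under the proposal.
print(current_fine(100_000_000))        # 20000000.0
print(proposed_fine(100_000_000, 0.1))  # 2000000.0
```

Tying the multiplier to demonstrated harm would keep the deterrent for egregious mislabeling while sparing firms that misread an abstract category in good faith.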

References:

Hoffman, S. (2023, September 26). The EU AI Act: A Primer. CSET, Georgetown University. https://cset.georgetown.edu/article/the-eu-ai-act-a-primer/

Wörsdörfer, M. (2023, August 17). The E.U.’s Artificial Intelligence Act: An Ordoliberal Assessment. Forthcoming in AI and Ethics. Available at SSRN: https://ssrn.com/abstract=4544276 or http://dx.doi.org/10.2139/ssrn.4544276

Regulation of the European Parliament and of the Council ... (n.d.). EUR-Lex. https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF