The Artificial Intelligence Act (AI Act) is a European Union (EU) law regulating the use of artificial intelligence (AI). It is the first major AI law in the world. The AI Act establishes a common regulatory and legal framework for AI technologies within the EU.
The AI Act will go into effect on 1 August 2024, with provisions coming into force gradually over the following 6 to 36 months.
The Act will follow a gradual but comprehensive adoption process, giving businesses and organizations sufficient time to understand and adapt to the new regulations. The EU AI Act is expected to be fully operational by 2026.
The EU AI Act: The World’s First Major Law for AI
The Artificial Intelligence Act is a comprehensive legal framework first proposed by the European Commission in April 2021. The EU initially planned to issue guidelines and recommendations for the use of AI, but the technology's growing influence led to a binding regulation instead.
The EU AI Act is the world’s first major law for AI, designed to regulate the use of AI across various sectors, ensuring that AI systems are safe, transparent, ethical, and respect fundamental human rights.
The law covers all types of AI across various sectors, with exceptions for military, national security, research, and non-professional purposes.
Note that the AI Act does not grant rights to individuals; instead, it regulates AI systems and the entities that use AI in a professional context.
What Does the EU AI Act Apply to?
The Act defines four levels of potential risk for AI applications, plus an additional category for general-purpose AI. Its obligations fall mainly on one of these categories: high-risk AI systems. These are AI applications used in critical areas like healthcare, education, banking, insurance, law enforcement, and other public services.
The Act sets strict rules for high-risk AI systems, like using high-quality datasets, applying risk-mitigation measures, and implementing human oversight.
On the other hand, the law does not regulate low-risk AI systems, such as spam filters or AI used in non-critical domains.
Like the EU's General Data Protection Regulation (GDPR), the Act applies extraterritorially to providers from outside the EU if they have users within the EU. This means that if an entity is registered in the USA or another country outside the EU, but has users within the EU, it must comply with the AI Act as well.
Key Provisions of the AI Act
The AI Act comprises the following features:
- Risk-based approach. The Act classifies AI applications by their risk of causing harm to society. Under this approach, AI applications in sensitive areas like healthcare, banking, insurance, and law enforcement undergo careful assessment to ensure they meet high safety and ethical standards. There are four levels of potential risk: unacceptable risk, high risk, limited risk, and minimal risk. How an AI system is regulated depends on its risk level.
- Transparency and accountability. The Act requires entities to inform individuals when they are interacting with AI, particularly in critical sectors such as healthcare, employment, law enforcement, finance, and insurance. This includes providing clear information about the AI system and the data it uses, the logic behind its decisions, and the potential impact on users. For example, users chatting with an AI-powered chatbot must be told that they are interacting with AI.
- Human oversight. The AI Act requires high-risk AI systems to include human oversight mechanisms so that humans can intervene and override AI decisions when necessary for safety. This provision aims to prevent harmful or unintended consequences of AI operations, especially in critical areas.
- Data governance. The AI Act emphasizes data governance: high-risk AI systems must use high-quality datasets that are tested and free from bias and inaccuracies. This mitigates the risk of discriminatory outcomes and helps ensure that AI systems make safe, ethical, fair, and accurate decisions.
- Market surveillance and enforcement. To ensure compliance with the AI Act, the EU aims to set up a comprehensive market surveillance and enforcement framework. National authorities will be responsible for monitoring AI applications within their jurisdictions and will have the right to conduct inspections and impose penalties for non-compliance. The overall aim of this framework is to create a culture of accountability among AI developers and deployers within the EU.
- Implications for AI innovation and public confidence. By establishing clear regulatory guidelines, the law gives developers a roadmap for building AI systems that are safe, ethical, and trustworthy by design. This is expected to enhance public confidence in AI technologies and their applications across various sectors.
- Fostering innovation. Alongside its regulatory requirements, the law also fosters innovation by creating a stable and predictable regulatory environment. By clearly defining acceptable practices, the regulation reduces uncertainty for developers and investors, encouraging them to pursue innovative AI projects. Moreover, the Act's risk-based approach keeps regulatory requirements proportionate to potential risks, meaning that low-risk AI systems can be developed and deployed without unnecessary restrictions.
- Bans on certain AI applications. The AI Act prohibits AI applications whose risk level is deemed “unacceptable.” These include real-time remote biometric identification systems in public spaces and so-called “social scoring” systems that rank citizens based on the aggregation and analysis of their personal data. Predictive policing and emotion recognition in the workplace and in schools are also banned. These prohibitions aim to protect individual rights and freedoms.
- Setting global standards. The AI Act is the first major law in the world to regulate AI applications, positioning the EU as a global leader in AI regulation and setting a benchmark for other countries to follow. As AI technologies continue to evolve, more countries are expected to adopt similar laws. The AI Act's principles of safety, transparency, non-discrimination, and human oversight can serve as an example for international regulatory efforts.
Risk-Based Approach of the AI Act
The AI Act applies a risk-based approach to artificial intelligence, meaning that different applications of the technology are treated differently, depending on the perceived threats they pose to society. There are four levels of potential risk for AI applications, plus an additional category for general-purpose AI (a simplified code sketch of this tiering follows the list):
- Unacceptable risk.
- High risk.
- General-purpose and generative AI.
- Limited risk.
- Minimal risk.
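Conceptually, the tiering works like a single classification: each AI system falls into exactly one tier, and the tier determines which obligations apply. The Python sketch below only illustrates this idea; the tier names follow the Act, but the example systems and their assignments are hypothetical simplifications, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the EU AI Act (plus the general-purpose AI category)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: assessment, oversight, registration"
    GENERAL_PURPOSE = "transparency and evaluation duties for GPAI models"
    LIMITED = "transparency obligations, e.g. disclosing AI-generated content"
    MINIMAL = "no mandatory obligations; voluntary codes of conduct"

# Hypothetical, simplified examples of how systems might map to tiers.
# Real classification requires legal analysis of the Act and its annexes.
EXAMPLE_CLASSIFICATION = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "large language model chatbot": RiskTier.GENERAL_PURPOSE,
    "deepfake image generator": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```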
Unacceptable risk
AI applications categorized as posing an unacceptable risk are prohibited under the EU AI Act. These applications are considered harmful to people. They comprise:
- AI technologies that manipulate human behavior, especially those targeting specific groups or vulnerable populations, such as voice-activated toys designed to encourage unsafe behavior in children.
- AI technologies that implement social scoring, which involves evaluating individuals based on their personal characteristics, socio-economic status or behavior.
- AI technologies that involve biometric identification and classification, such as technologies to recognize and categorize people based on their biometric data.
- AI technologies that use real-time remote biometric identification, such as facial recognition, in public spaces.
However, the Act provides for certain exceptions, particularly for law enforcement purposes and serious criminal offenses. Real-time remote biometric identification may be used only in a narrowly defined set of grave cases. Post-event remote biometric identification, which takes place after a delay, is also allowed for investigating serious criminal offenses, but it requires prior judicial authorization.
High risk
AI technologies that pose significant threats to health, safety, or fundamental human rights are categorized as high risk under the EU AI Act. They are subject to quality, transparency, human oversight, and safety obligations.
Every AI technology classified as high risk must undergo a thorough evaluation process before entering the market. Their performance and compliance with regulations will be continually monitored throughout their operational life.
These high-risk AI technologies consist of two distinct groups:
1. AI technologies integrated into products that are subject to the EU’s product safety laws. This includes many products, like automobiles, aviation-related items, medical equipment, elevators, and toys.
2. AI technologies that operate in specific critical and sensitive sectors, including:
- The management and functioning of essential infrastructure.
- The educational sector, including both general education and vocational training.
- Employment-related areas, including worker management and self-employment opportunities.
- Access to essential private and public services and benefits.
- Law enforcement.
- The management of migration, asylum, and border control technologies.
- Assistance in the interpretation and application of the law.
The application of these high-risk AI technologies must be registered in a dedicated EU database.
General-purpose and generative AI
This category was added in 2023 and includes general-purpose and generative AI models like ChatGPT. They are subject to transparency requirements. These requirements include:
- Users must be explicitly informed that the content they are interacting with has been generated by a generative AI technology.
- These AI technologies must be designed to prevent the creation of illegal content.
- Owners of such technologies must provide summaries of copyrighted data that have been used in the training of these AI models.
- More advanced AI models with significant impact, such as GPT-4, are required to undergo extensive evaluations. Any serious incidents arising from these technologies must be reported to the European Commission.
The European Commission aims to monitor and regulate general-purpose and generative AI technologies that could potentially pose systemic risks.
Limited risk
AI technologies in this category have certain transparency obligations. Users of such AI systems must be informed that they are interacting with AI so that they can make informed choices about their continued use of these applications.
Limited risk AI technologies include AI-generated or manipulated content like images, audio, or video, including deepfake technology.
However, free and open-source AI models (i.e., those whose parameters are publicly available) are not regulated, with some exceptions.
Minimal risk
AI technologies like spam filters and video games are considered to pose minimal risk and are therefore not regulated. Because the Act harmonizes the rules across the EU, Member States cannot impose additional regulations on these systems.
A voluntary code of conduct is suggested for these AI systems.
Most AI applications are expected to fall into this category.
The Artificial Intelligence Office
The European Commission has established the Artificial Intelligence Office (AI Office) within the Commission to promote cooperation among Member States and ensure compliance with the regulation. The European AI Office will be the center of AI expertise across the EU, playing a key role in implementing the AI Act, especially in relation to general-purpose AI models.
The European AI Office will support the development and use of safe and trustworthy AI while protecting against AI risks.
The AI Office will help implement a comprehensive legal framework for AI, safeguarding the health, safety, and fundamental rights of people and providing legal certainty to businesses across all 27 EU Member States.
The European AI Office consists of five units and two advisors:
- The Excellence in AI and Robotics unit.
- The Regulation and Compliance unit.
- The AI Safety unit.
- The AI Innovation and Policy Coordination unit.
- The AI for Societal Good unit.
- The Lead Scientific Advisor.
- The Advisor for International Affairs.
In January 2024, the Commission launched an AI innovation package to support startups and SMEs in developing reliable AI that complies with EU values and rules. The AI Office takes part in this package alongside the GenAI4EU initiative; together they are expected to foster novel use cases and emerging applications across Europe's 14 industrial ecosystems and the public sector. Application areas include robotics, health, biotech, manufacturing, mobility, climate, and virtual worlds.
How to Comply with the AI Act for High-Risk AI Technologies?
The EU AI Act provides a comprehensive set of compliance measures for AI technologies classified as high-risk, covering all stages from design and implementation to post-market monitoring (a simplified checklist sketch follows the list). These measures include:
- Implementation of a Risk Management System.
- Requirements for data handling and governance.
- Preparation and maintenance of Detailed Technical Documentation.
- Obligations for record-keeping.
- Ensuring transparency and clear information dissemination to users.
- Guaranteeing human oversight.
- Upholding standards for accuracy, robustness, and cybersecurity.
- Establishment of a Quality Management System.
- Conducting a Fundamental Rights Impact Assessment.
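One rough way to picture these obligations is as a checklist that must be fully satisfied before a high-risk system can be placed on the market. The sketch below is a hypothetical illustration of that idea, with requirement names paraphrasing the list above; it is not an official compliance tool or legal advice.

```python
# Hypothetical compliance checklist for a high-risk AI system.
# Requirement names paraphrase the AI Act's obligations; this is
# an illustration only, not an official tool.
HIGH_RISK_REQUIREMENTS = [
    "risk management system",
    "data handling and governance",
    "technical documentation",
    "record-keeping",
    "transparency and user information",
    "human oversight",
    "accuracy, robustness, and cybersecurity",
    "quality management system",
    "fundamental rights impact assessment",
]

def market_ready(completed: set[str]) -> bool:
    """A system is market-ready only when every requirement is met."""
    missing = [r for r in HIGH_RISK_REQUIREMENTS if r not in completed]
    for r in missing:
        print(f"missing: {r}")
    return not missing

# Example: a system that has completed everything except two items.
done = set(HIGH_RISK_REQUIREMENTS) - {"human oversight", "record-keeping"}
print("market-ready:", market_ready(done))
```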
While AI technologies classified as limited-risk are not subject to the same compliance checks, they are still evaluated against similar criteria to ensure they meet the necessary transparency and safety standards.
Penalties for Non-Compliance with the AI Act
In case of a breach of the AI Act, the EU Commission will have the power to fine companies up to 35 million euros or 7% of their annual global revenue, whichever is higher.
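The "whichever is higher" rule means the effective cap scales with company size: the flat amount binds smaller companies, while the percentage binds larger ones. A quick illustration with hypothetical revenue figures:

```python
# Maximum fine under the AI Act's top penalty tier:
# EUR 35 million or 7% of annual global revenue, whichever is higher.
def max_fine(annual_global_revenue_eur: float) -> float:
    return max(35_000_000, 0.07 * annual_global_revenue_eur)

# Hypothetical examples:
print(max_fine(100_000_000))    # 35,000,000 -- the flat cap dominates
print(max_fine(1_000_000_000))  # 70,000,000 -- 7% of revenue dominates
```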
Frequently Asked Questions
What is the Artificial Intelligence Act?
The Artificial Intelligence Act (AI Act) is a European Union (EU) law regulating the use of artificial intelligence (AI). It establishes a common regulatory and legal framework for AI technologies within the EU. The AI Act will go into effect on 1 August 2024, with provisions coming into force gradually over the following 6 to 36 months. Read CookieScript's privacy laws section to stay updated.
What are the penalties for non-compliance with the AI Act?
In case of a breach of the AI Act, the EU Commission will have the power to fine companies up to 35 million euros or 7% of their annual global revenue, whichever is higher. Use CookieScript CMP to comply with the GDPR and get the latest news about the AI Act.
How to comply with the AI Act for high-risk AI technologies?
The EU AI Act provides a comprehensive set of compliance measures for AI technologies classified as high-risk, including risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy and cybersecurity standards, quality management, and a fundamental rights impact assessment. Read CookieScript's privacy laws section for the full list of requirements and to stay updated.