EU AI Act Checklist for Websites
ON THIS PAGE
- What Is the EU AI Act and Why Does It Matter for Websites
- EU AI Act Risk Categories Explained
- Who Must Comply With the EU AI Act (And the Exemptions)
- Does the EU AI Act Apply to My Website?
- Core Compliance Requirements for Websites
- Website-Specific Use Cases for The EU AI Act
- Required Documentation for Websites Under the EU AI Act
- The EU AI Act Enforcement and Penalties
- EU AI Act Checklist for Websites
- Frequently Asked Questions
The EU AI Act introduces new obligations for businesses located within the EU and for businesses that target individuals in the EU.
Read this blog post to understand the scope and definitions of the EU AI Act and how to achieve compliance with the Act.
What Is the EU AI Act and Why Does It Matter for Websites
The EU AI Act is the European Union’s law regulating the use of artificial intelligence. Unlike sector-specific rules, it takes a horizontal approach, meaning it applies across industries, including websites, apps, and online platforms.
The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence.
The Act entered into force on August 1, 2024; however, its rules are implemented in phases. August 2, 2026, is the full application date of the Act, when most of the rules become enforceable.
If your website uses AI for core functionality, for example, chatbots, personalization, automated decisions, content generation, or fraud detection, the EU AI Act may apply to you.
The EU AI Act differs from other laws in that it uses a risk-based approach. Instead of banning AI outright, it categorizes AI systems based on how much harm they could cause to individuals. The greater the potential risk of the AI, the heavier the regulation.
EU AI Act Risk Categories Explained
The EU AI Act classifies all AI systems operating in the EU into four main risk levels:
- Unacceptable risk
- High risk
- Limited risk
- Minimal risk.
1. Unacceptable Risk
AI systems that exploit individual vulnerabilities or pose a clear threat to people's safety and fundamental rights. These include social scoring, real-time facial recognition in public spaces, subliminal manipulation, etc.
AI systems with unacceptable risk have been banned since February 2025.
2. High Risk
AI systems that affect fundamental human rights, access to services, or legal outcomes.
These include:
- Automated creditworthiness checks
- Biometric identification
- Law enforcement
- AI used in hiring or eligibility decisions
- Education grading
- Critical infrastructure.
High-risk AI has the strictest compliance requirements. Full compliance must be reached by August 2026.
3. Limited Risk
AI systems that interact directly with users and may influence user behavior. This AI risk level category is especially relevant for websites.
Examples include:
- Chatbots (like ChatGPT)
- AI content generators
- Emotion-recognition features
- Deepfakes.
For limited-risk AI, the EU AI Act requires transparency: websites must disclose that they are using such AI systems.
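In practice, the chatbot disclosure can be as simple as a fixed notice rendered before the first bot message. A minimal sketch in TypeScript (the function name, bot name, and wording below are illustrative assumptions, not text prescribed by the Act):

```typescript
// Hypothetical helper: returns the disclosure line a chat widget could
// render before the first AI message. The wording is an illustration,
// not legally vetted text.
function aiDisclosureNotice(botName: string): string {
  return `${botName} is an AI assistant. You are chatting with an ` +
         `automated system, not a human agent.`;
}

const notice = aiDisclosureNotice("ShopBot");
console.log(notice);
```

A site would typically show this text pinned at the top of the chat window, so the disclosure is visible before any interaction starts.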
4. Minimal Risk
Low-impact AI, such as spam filters, basic optimization tools, or AI-powered video games.
These are largely unregulated and free to use, though providers are still encouraged to follow voluntary industry codes of conduct.
General-Purpose AI (GPAI) models like GPT-4 or Gemini have their own rules. Since August 2025, they have been required to document their training data and respect EU copyright laws.
Who Must Comply With the EU AI Act (And the Exemptions)
The EU AI Act has a broad territorial reach, similar to the GDPR.
You must comply with the EU AI Act if you:
- Are established in the EU and use AI on your website.
- Are based outside the EU but offer services to individuals in the EU.
- Use AI outputs that affect individuals located in the EU.
This includes:
- SaaS platforms with AI features.
- E-commerce sites using AI recommendations.
- Websites with AI chatbots or automated moderation.
- Publishers using AI-generated content at scale.
Who Is Exempt From The EU AI Act?
Some AI platforms or uses of AI may be exempt from the EU AI Act, including:
- Minimal-risk AI that does not affect user rights.
- Purely internal AI systems with no external impact.
- AI used exclusively for personal, non-commercial purposes.
Does the EU AI Act Apply to My Website?
The EU AI Act doesn't regulate websites as such, but rather the AI systems integrated into them. Whether the Act applies to your website depends on what your website does.
If your website serves users in the European Union and uses AI for automation, decision-making, recommendations, or tasks that influence users, the Act likely applies to your business.
The EU AI Act applies to websites if your website uses these AI systems:
- AI chatbots or virtual assistants answering user questions
You must clearly disclose to users that they are talking to an AI.
- AI for content generation (blogs, images)
If the content of your website is generated by AI and not significantly reviewed by a human, it must be labeled as AI-generated in a machine-readable format.
- User profiling and recommendations
Basic recommendation engines (like "people also bought/viewed") are generally considered minimal-risk AI. However, when profiling affects fundamental rights, such as hiring, insurance pricing, or creditworthiness checks, the AI is classified as high risk.
- Automated scoring, ranking, or profiling
If your site uses AI to filter resumes or to score, profile, or rank candidates, it is classified as high-risk AI and faces the strictest requirements.
- AI-based fraud, security, or moderation tools
If your site uses AI tools to recognize real persons or to differentiate between humans and bots, the Act's transparency rules may apply.
- AI-generated content
If you use AI-generated images, text, or video on your site, the Act applies to you.
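The Act requires machine-readable marking of AI-generated content but does not prescribe a single format; publishers commonly use page or asset metadata. A hypothetical sketch, assuming a simple page-level meta-tag convention of our own invention (the `ai-generated` name is not an official standard):

```typescript
// Sketch: build a <meta> tag marking a page's content as AI-generated.
// The "ai-generated" name/value pair is illustrative only; the Act does
// not mandate this exact tag, and media files may instead carry
// provenance metadata inside the asset itself.
function aiContentMetaTag(generated: boolean): string {
  return `<meta name="ai-generated" content="${generated}">`;
}

const tag = aiContentMetaTag(true);
console.log(tag); // embed in <head>, alongside a visible on-page label
```

Pairing a machine-readable marker like this with a visible "AI-generated" label covers both the automated and the human-facing side of the disclosure.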
Core Compliance Requirements for Websites
While obligations vary by risk level, several requirements are universal and especially relevant for websites:
- Transparency
Websites must inform users that they are interacting with AI or that the content is AI-generated.
- Human oversight
AI systems cannot operate unsupervised. Humans must regularly monitor outputs, intervene when needed, and be able to override automated decisions.
- Data governance
Businesses must pay special attention to data quality when working with AI systems. Data used for AI training must be:
  - Relevant and representative.
  - Free from known bias where possible.
  - Clearly documented.
- Accuracy, robustness, and security
Businesses must only use AI systems that are reliable and secure. Website owners must ensure that AI tools are protected against misuse or manipulation.
For websites, these data governance, transparency, and security requirements largely overlap with existing privacy and security practices, but the EU AI Act applies them specifically to the use of AI.
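The human oversight requirement can be operationalized as a review gate between the model's output and any user-facing effect. A minimal sketch, assuming a hypothetical decision type and an illustrative confidence threshold (neither is prescribed by the Act):

```typescript
interface AiDecision {
  subject: string;                 // e.g., an application or ticket ID
  outcome: "approve" | "reject";
  confidence: number;              // model confidence, 0..1
}

// Route adverse or low-confidence outcomes to a human reviewer instead
// of acting on them automatically. The 0.9 threshold is an illustrative
// choice, not a legal requirement.
function needsHumanReview(d: AiDecision): boolean {
  return d.outcome === "reject" || d.confidence < 0.9;
}

const queue: AiDecision[] = [
  { subject: "app-1", outcome: "approve", confidence: 0.97 },
  { subject: "app-2", outcome: "reject", confidence: 0.99 },
];

const forReview = queue.filter(needsHumanReview);
console.log(forReview.map(d => d.subject)); // only "app-2" goes to a human
```

Logging each human review alongside the decision also produces exactly the oversight documentation the Act expects you to keep.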
Website-Specific Use Cases for the EU AI Act
The EU AI Act treats websites differently depending on the specific function the AI performs on your site.
Below are real website scenarios categorized by their corresponding risk levels.
E-commerce & retail websites
Many online stores use AI to improve the shopping experience by providing product recommendations or helping users search for products. These uses are generally low-impact under the Act.
- Chatbots
E-commerce stores that use AI chatbots to help customers find products or answer pricing, delivery, and other product-related questions have a transparency obligation. You must clearly disclose to the user that they are interacting with an AI, usually via a disclaimer in the chat window. Chatbots are classified as limited risk.
- Product recommendations
The Act generally does not regulate standard engines that show product recommendations such as "You might also like..." based on browsing history or purchased products.
- Dynamic Pricing
Adjusting prices based on demand or inventory is generally allowed; however, using AI to change prices to target vulnerable individuals (e.g., in an emergency) is prohibited. Standard dynamic-pricing tools are classified as minimal risk.
Media, blogs, & news sites
The focus here is on preventing user manipulation.
- AI-Generated Articles
If your site publishes AI-created news or other information for the general public, you must label it as AI-generated unless it has undergone significant human review. AI tools for content generation are classified as limited risk.
- Deepfakes & synthetic media
If your site hosts AI-generated images or videos that look like real people or events, you must clearly label them as AI-generated to prevent deception. Such AI tools are limited risk.
- AI translation tools
Using AI to translate your website into other languages is generally allowed and does not require special labeling. Such AI tools are minimal risk.
Recruitment and profiling
This is one of the most strictly regulated areas. If your website is used for recruitment and scores or evaluates candidates, you are most probably using high-risk AI tools.
AI-driven analysis and profiling may trigger both AI Act and GDPR obligations; in many cases, you will need to obtain user consent and must honor user rights.
- CV screening & candidate evaluation
The EU AI Act strictly regulates using AI to filter resumes and to score or rank candidates. By August 2026, a human must review decisions made by the AI, and the system must be registered in the EU database.
Such AI tools for candidate evaluation are high risk.
Educational & training platforms
Like recruitment, this is also a heavily regulated area that falls into the high-risk zone.
- Online teaching and examination
If your site hosts online courses and uses AI to detect cheating via webcam during examinations, it is high risk. You must ensure the AI doesn't discriminate based on race or disability.
- Admissions processing
Using AI to decide which students are accepted into a school via your website is also strictly regulated by the Act. You must ensure that a human reviews the processing and decisions made by the AI.
Such AI tools are high risk.
Banking, FinTech, & Insurance
Financial and insurance websites are heavily regulated by the Act, since AI can deny people access to essential services.
- Credit Scoring
Using AI to determine whether a website visitor is eligible for a loan or a credit card is high risk. You have transparency obligations, so that the AI's decision-making can be explained to the user. You must also ensure the data used by the AI is unbiased.
- Insurance Risk Assessment
AI systems that perform insurance risk assessments for life and health insurance on your website are high risk. AI tools for standard car or home insurance are currently less regulated but still subject to strict data privacy rules.
- Fraud detection
AI that runs in the background to detect credit card fraud or bot attacks is generally less regulated by the Act. Such AI tools are minimal risk.
If you are a website owner in early 2026, categorize the AI tools your website uses and get ready for compliance by August 2, 2026: every high-risk system must be thoroughly evaluated and fully compliant, and every limited-risk system must be clearly labeled by that date.
Required Documentation for Websites Under the EU AI Act
If your website uses AI tools, you must comply with the EU AI Act. Even if you rely on third-party AI tools, you should be able to demonstrate due diligence and explain how AI works on your website.
Documentation is essential to demonstrate compliance. You must disclose all AI tools used and their working principles.
When documenting AI tools, take into consideration these aspects:
- Technical descriptions of AI systems.
- Risk assessments and mitigation measures.
- Training data summaries.
- Human oversight practices.
- Vendor compliance documentation.
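One lightweight way to keep such documentation auditable is a structured record per AI system whose fields mirror the list above. The record shape below is an assumption for illustration, not a format mandated by the Act:

```typescript
// Illustrative documentation record per AI system; field names are our
// own, chosen to mirror the documentation checklist, not a legal schema.
interface AiSystemRecord {
  name: string;                                               // e.g., "support chatbot"
  riskTier: "unacceptable" | "high" | "limited" | "minimal";  // EU AI Act tier
  technicalDescription: string;
  riskAssessment: string;                                     // risks found and mitigations
  trainingDataSummary: string;
  humanOversight: string;                                     // who reviews outputs, and how often
  vendorDocs: string[];                                       // IDs/links of vendor compliance docs
}

const chatbot: AiSystemRecord = {
  name: "support chatbot",
  riskTier: "limited",
  technicalDescription: "Third-party LLM chat widget embedded on the site",
  riskAssessment: "May give wrong answers; AI disclaimer shown in chat window",
  trainingDataSummary: "Vendor-provided; see vendor documentation",
  humanOversight: "Weekly transcript review by the support lead",
  vendorDocs: ["vendor-dpa-2025"],
};
console.log(chatbot.riskTier);
```

Keeping one such record per system, including third-party tools, gives you a single inventory to show an auditor or authority on request.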
The EU AI Act Enforcement and Penalties
For the most severe breaches, non-compliance with the EU AI Act may lead to fines up to €35 million or 7% of global annual turnover, whichever is higher.
Penalties for non-compliance are risk-based and depend on:
- The type of violation.
- The risk category of AI.
- Whether the failure was intentional or negligent.
Penalties based on the risk category of AI:
- Prohibited AI practices
Up to €35 million or 7% of total worldwide annual turnover, whichever is higher.
- General-Purpose AI (GPAI) violations
Up to €15 million or 3% of global annual turnover, whichever is higher.
- High-risk AI system requirements (e.g., data governance, transparency)
Up to €15 million or 3% of global annual turnover, whichever is higher.
- Incorrect or misleading information to authorities
Up to €7.5 million or 1% of global annual turnover, whichever is higher.
For websites with many customers and large turnovers, penalties can quickly become a heavy burden. However, fines for Small and Medium-sized Enterprises (SMEs) and startups are capped at the lower end of the thresholds, ensuring they are not devastating.
Penalties are set by EU member states, with annual reporting to the Commission. Penalties must be effective, proportionate, and dissuasive.
EU AI Act Checklist for Websites
While prohibited practices are already enforced, other AI-related requirements, such as transparency rules for chatbots and high-risk system requirements, become fully enforceable on August 2, 2026. Get ready for the EU AI Act now.
Use this checklist to check your website's AI compliance before the deadline:
- Identify all AI systems used on your website
First, you should know exactly what AI systems are running on your website, including those provided by third-party partners and vendors.
- Understand your risk level
The EU AI Act sets four risk tiers, and the requirements for AI systems vary greatly between them. Extend your AI risk categorization to the third-party vendors you work with.
- Register high-risk systems in the EU database
Before placing a product on the market, providers must register Annex III high-risk systems in the EU database (Articles 49 and 71).
- Establish AI risk management
Regardless of your risk level, you should establish and maintain a robust AI risk management system. It can include technical measures for risk mitigation, such as risk management software, automation, and even AI itself, as well as human-related measures.
- Add clear AI transparency notices
The Act's transparency requirements oblige you to notify users that they are communicating with AI or that content was generated by AI. Make sure users can easily understand this.
- Obtain the right technical documentation
Document your AI systems and your organization's information security, and integrate this documentation into your business processes.
- Establish data governance
Data governance is essential to ensure the confidentiality, availability, and integrity of the information businesses manage. Ensure the quality of your data for training, validation, and testing of AI systems, and make sure you have user consent to use their data for AI training.
- Implement human oversight
All AI outcomes, especially those that fall into higher risk categories, must be reviewed by a human. Make sure regular human oversight activities are well documented.
- Perform Fundamental Rights Impact Assessments (FRIAs) when required
Public bodies and private entities providing public services must perform a FRIA before deploying high-risk systems. The need for a FRIA also depends on the use case (e.g., certain banking and insurance uses).
- Establish internal monitoring processes
Rules change, setting new requirements or updating existing ones. Embrace continuous compliance with the Act to maintain appropriate levels of cybersecurity, data accuracy, and robustness.
- Align AI disclosures with GDPR and consent mechanisms
Many requirements of the EU AI Act and the GDPR overlap, such as transparency, data privacy, security, and consent requirements for the use of personal data. Implement a consent mechanism, such as a Consent Management Platform (CMP), to obtain and store user consent to use their data for AI system training.
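The last checklist item, consent-gated AI training, reduces to a simple rule: a user's data enters the training set only if a stored consent flag allows it. A minimal sketch, assuming hypothetical field names (in practice, a CMP would supply the consent signal):

```typescript
// Illustrative user record; the consent flag would come from a CMP's
// stored consent records. Field names are assumptions, not a standard.
interface UserRecord {
  id: string;
  consentedToAiTraining: boolean;
  data: string;
}

// Keep only records whose owners consented to AI-training use.
function trainingSet(users: UserRecord[]): UserRecord[] {
  return users.filter(u => u.consentedToAiTraining);
}

const users: UserRecord[] = [
  { id: "u1", consentedToAiTraining: true, data: "..." },
  { id: "u2", consentedToAiTraining: false, data: "..." },
];
console.log(trainingSet(users).map(u => u.id)); // only consenting users remain
```

Filtering at the point where the dataset is assembled, rather than afterwards, ensures non-consenting users' data never reaches the training pipeline at all.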
CookieScript CMP is a professional CMP that can help you achieve EU AI Act compliance. It has the following features:
- Compliant cookie banners
- Cookie banner customization
- Integrations with CMS platforms like WordPress, Shopify, Joomla, etc.
- Google Consent Mode v2 integration
- IAB TCF v2.2 integration
- Google Tag Manager integration
- Certification by Google
- CookieScript API
- Cookie Scanner
- Consent recordings
- Third-party cookie blocking
- Geo-targeting
- Local storage and session storage scanning
Frequently Asked Questions
When does the EU AI Act come into effect?
The Act entered into force on August 1, 2024; however, its rules are implemented in phases. August 2, 2026, is the full application date of the Act, when most of the rules become enforceable.
What are the risk categories defined by the EU AI Act?
The EU AI Act classifies all AI systems operating in the EU into four main risk levels: unacceptable risk, high risk, limited risk, and minimal risk. There are different requirements for AI systems, based on the category, and also different penalties for violating the Act.
What AI systems fall into the high-risk category?
High-risk AI systems are AI systems that affect fundamental human rights, access to services, or legal outcomes. These include automated creditworthiness checks, biometric identification, law enforcement, AI used in hiring or eligibility decisions, education grading, and critical infrastructure.
What are the penalties under the EU AI Act?
Penalties for non-compliance with the Act are risk-based. Prohibited AI practices may lead to fines of up to €35 million or 7% of global annual turnover, whichever is higher. Violations of general-purpose AI rules and of high-risk AI system requirements may each reach up to €15 million or 3% of global annual turnover, and providing incorrect or misleading information to authorities up to €7.5 million or 1% of global annual turnover, whichever is higher. Use CookieScript CMP to comply with the Act and avoid penalties.
How to comply with the EU AI Act?
To comply with the EU AI Act, identify all AI systems used on your website, understand your risk level, register high‑risk systems in the EU database, establish AI risk management, add clear AI transparency notices, obtain the right technical documentation, establish data governance, and align AI disclosures with GDPR and consent mechanisms. Use CookieScript CMP to notify users and obtain user consent.