
Age Gating vs. Age Assurance: The New Kids Online Safety Act (KOSA) Rules

The Kids Online Safety Act (KOSA) strengthens protections for minors by requiring online platforms to prevent harmful content and to shift from self-attested age gating to robust age assurance.

Online children’s privacy is certainly an important issue. Restricting children’s access to age-inappropriate content is nothing new; it has been standard practice for decades.

In 2026, protecting children’s privacy raises new challenges. As bullying, grooming, pornography, and other inappropriate content proliferate online, children are at risk of serious harm, and society is pressing platforms to implement robust age assurance in digital environments.

The legislative landscape for age assurance across North America, Europe, and other regions is shifting. Simple age gating (self-attestation) is no longer sufficient; regulators increasingly expect age assurance instead.

Regulators are responding with evolving rules that raise both technical and policy questions.

What Is the Kids Online Safety Act (KOSA)?

The Kids Online Safety Act (KOSA) is proposed U.S. legislation aimed at protecting minors online by requiring social media platforms to prevent harm to minors. It introduces the concepts of a “Duty of Care” and parental control, and requires platforms to mitigate risks like addiction, exploitation, and harmful content.

Key goals of KOSA include:

  • Duty of Care
    Platforms must take reasonable steps to actively prevent harm to users under 18, such as bullying, sexual exploitation, and the promotion of self-harm, eating disorders, or substance abuse. Social media, gaming, and other apps must prevent these risks, not just react when harmful or addictive content reaches minors.
  • Parental control
    Parents should be able to monitor and manage their children's online behavior and report harmful content.
  • Safety by default
    Online platforms are required to enable the highest privacy and safety settings for minors by default.
  • Disabling addictive features
    Social media companies must provide options for minors to opt out of algorithmic recommendations.
  • Transparency and audits
    Large social media platforms must ensure transparency by performing annual, independent audits to assess their impact on minors.


KOSA indirectly forces platforms to evaluate users’ ages and determine whether a user is a minor. Platforms therefore need age verification tools.

Simple age gating (self-attestation) is no longer sufficient. KOSA requires implementing age assurance methods through which users can prove their age.

Who Needs to Comply With KOSA?

KOSA applies to all platforms and services that could be accessed by minors.

In particular, KOSA applies to a range of online platforms and services, including:

  • Social media platforms, from the largest networks to smaller ones
  • Online communities or forums
  • Gaming platforms
  • Streaming services
  • Marketplaces or apps with user-generated content.


Even if your product isn’t intended for kids, that doesn’t automatically exclude you from the KOSA regulation. If your product or service could be accessed by children under 18, you must comply with KOSA.

While KOSA enforcement focuses mainly on larger platforms, smaller companies aren’t exempt from compliance. There is no threshold based on the number of users or revenue.

Why KOSA Changes How Websites Handle Age Verification

Age gating relied on self-attested age, so minors could still access inappropriate content. Under the new KOSA requirements, platforms are accountable for harm to minors, so they must implement reliable measures to determine user age and protect users’ safety and privacy online.

Simple age gating was easy to bypass.

Typically, a platform asks whether a user is 18 years old. The user could simply click Yes and enter the site.
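The weakness is easy to see in code. Below is a minimal, purely illustrative sketch (the function name is hypothetical) of a self-attestation age gate, whose only input is the user's own claim:

```python
def passes_age_gate(claims_to_be_18_plus: bool) -> bool:
    """Self-attestation gate: trusts whatever the user clicked."""
    # Nothing is verified -- a minor who clicks "Yes" looks exactly
    # like an adult, which is why regulators consider this insufficient.
    return claims_to_be_18_plus
```

Because the claim itself is the only evidence, `passes_age_gate(True)` admits any user who clicks Yes, regardless of their actual age.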

Age assurance means platforms must reasonably demonstrate they attempted to identify minors.

Platforms are now accountable for harm to minors, and regulators expect proactive risk reduction rather than reaction after the fact.

By 2026, the Safety by Design movement has produced concrete legal results:

  • Liability
    If a minor accesses harmful content, a simple "I am 18+" checkbox no longer shields the platform from liability.
  • App store accountability
    Apple and Google now provide age-related signals from devices to apps, so an app can know a minor is using it without needing to check the user's ID.
  • Highly Effective Age Assurance (HEAA)
    In 2026, it became the new gold standard for keeping minors away from harmful content such as pornography and addictive algorithmic feeds. It requires a 99%+ accuracy rate in blocking minors.


To uphold the Duty of Care principle, platforms must reliably assess user age. They don’t necessarily have to perform full identity verification, but they must at least take reasonable steps to determine a user’s age. A single click to confirm age is no longer enough.

In 2026, regulators largely abandoned age gating in favor of more robust age assurance frameworks.

Age Gating vs. Age Assurance: What’s the Difference?

Age gating relies on user self-declaration, so it is very easy to bypass. Age assurance requires evidence of a user’s age (ID verification, face scans, etc.), making it harder to bypass. Despite its reliability, age assurance also has drawbacks: it is more complex to implement, and it collects a lot of personal data, raising privacy concerns.

Age gating relies on user self-declaration. A user must select a birth date or click "I am 18+."

Age gating is easy to implement, has minimal friction on users, and a platform doesn’t collect personal data.

However, age gating has drawbacks:

  • It is very easy to bypass.
  • It doesn’t offer sufficient legal protection under stricter data privacy laws.
  • Simple age gating is non-compliant for high-risk content.


Age assurance, on the other hand, is much harder to bypass, making it a much more reliable method to determine user age. It aligns better with regulatory requirements and expectations.

Users must provide evidence to prove their age, including:

  • ID verification
  • Face scans
  • Third-party verification services (e.g., bank data)
  • Behavioral or device-based signals.


Age assurance also has drawbacks:

  • It is more complex to implement.
  • It collects a lot of personal data that could be leaked.
  • It introduces friction for users and reduces user satisfaction.


Thus, the main difference between age gating and age assurance lies in whether a user merely self-declares their age or the platform requires a more reliable method to determine it.

When selecting between age gating vs age assurance, you should consider content, risk level, and target audience.

In 2026, age assurance is required for social media, gambling, and adult sites.

Age Assurance Techniques

Platforms are seeking reliable age assurance techniques to meet emerging legal standards.

In 2026, KOSA age verification requires platforms and apps to assess users’ age. The most often used age assurance techniques include:

  • ID verification
    This is the most obvious method to prove age. Its disadvantage is that it collects sensitive personal data and raises privacy issues.
  • Facial age estimation
    AI scans a user's face to estimate their age range without revealing their identity.
  • Third-party verification
    Users must provide proof from third-party services to confirm their age. For example, a bank statement could prove a user’s age.
  • Behavioral or device-based signals
    Platforms estimate user age from behavioral and device signals, allowing apps to perform age assurance without revealing the user's identity.
  • AI age estimation tools
    Some platforms use AI age estimation tools to determine users’ age.
  • Parental control
    Technologies on Apple, Android, and gaming consoles link children’s accounts with parents’ accounts, so parents can set restrictions on apps, purchases, and communications.


Experts suggest using a layered approach to KOSA-compliant age verification: platforms should combine various age assurance techniques to strengthen age verification while managing privacy risks. A layered approach also enables privacy-friendly verification, letting users prove their age without an ID.
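As a rough illustration of such a layered flow (all names, signal values, and thresholds here are hypothetical, not taken from KOSA), low-friction signals can be checked first and document checks reserved for high-risk content:

```python
from typing import Optional

def assure_age(
    device_age_signal: Optional[str],  # e.g. "minor", "adult", or None
    estimated_age: Optional[int],      # e.g. from facial age estimation
    content_risk: str,                 # "low", "medium", or "high"
) -> str:
    """Return the next step: 'allow', 'restrict', or 'verify_id'."""
    # Layer 1: trust an explicit device/OS-level minor signal.
    if device_age_signal == "minor":
        return "restrict"
    # Layer 2: use a privacy-friendly age estimate with a safety buffer.
    if estimated_age is not None:
        if estimated_age >= 21:  # buffer above 18 to absorb estimation error
            return "allow"
        if estimated_age < 18:
            return "restrict"
    # Layer 3: escalate to an ID check only where the risk justifies it.
    if content_risk == "high":
        return "verify_id"
    return "allow" if content_risk == "low" else "restrict"
```

In this sketch, ID verification is requested only when the privacy-friendly layers are inconclusive and the content risk is high, which limits both user friction and the amount of personal data collected.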

Age assurance techniques have the following benefits:

  • Age-appropriate content
    These techniques ensure minors can enjoy appropriate content while reducing exposure to harmful content and privacy risks.
  • Compliance & safety
    Many platforms, including Google and Apple, use these methods to provide age-appropriate, safer experiences as demanded by new digital regulations.

Age Gating or Age Assurance: Which Should You Choose?

There are no strict requirements for when to use age gating and when to use age assurance. Use age gating when your content is low-risk, you’re not directly targeting minors, and your product has minimal harm exposure. Use age assurance when your platform includes sensitive or addictive content, you have a significant under-18 audience, and your platform allows uploading user-generated content.

The Kids Online Safety Act sets requirements for companies to verify users’ ages.

In general, using age gating or age assurance depends on your content, risk level, and target audience.

Age assurance aligns better with regulatory requirements, but it is more difficult to implement and raises additional privacy issues.

Use age gating in these cases:

  • Your content is low-risk.
  • You’re not directly targeting minors.
  • Your product has minimal harm exposure.


However, don’t blindly rely on age gating. Re-evaluate your gating and content regularly to make sure it remains appropriate.

Use age assurance when:

  • Your platform includes sensitive or addictive content.
  • You have a significant under-18 audience.
  • Your platform allows uploading user-generated content.
  • You want a stronger legal defense against liability.
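The two checklists above can be folded into a simple decision helper (illustrative only, not legal advice; the function and parameter names are hypothetical):

```python
def choose_method(
    low_risk_content: bool,
    targets_minors: bool,
    significant_under_18_audience: bool,
    allows_user_generated_content: bool,
) -> str:
    """Pick 'age_gating' only when every low-risk condition holds."""
    if (low_risk_content
            and not targets_minors
            and not significant_under_18_audience
            and not allows_user_generated_content):
        return "age_gating"
    # Any sensitive content, minor audience, or user-generated content
    # tips the balance toward age assurance.
    return "age_assurance"
```

For example, a low-risk site with no minor audience and no user uploads could start with gating, while adding user-generated content alone is enough to push the answer to age assurance.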

What Counts as “Reasonable Measures” Under KOSA?

KOSA counts your effort to evaluate user age as a reasonable measure when you have assessed your risk, implemented proportionate safeguards, minimized harmful exposure, and avoided dark patterns.

KOSA doesn’t provide detailed instructions on when to use age gating vs. age assurance. Instead, it uses a flexible standard by introducing reasonable measures.

Reasonable measures let regulators evaluate context when deciding whether your platform or app complies with KOSA. This flexibility helps reduce compliance risk.

KOSA counts your effort to evaluate user age as reasonable measures when:

  • You assessed your risk
    To comply with KOSA, you need to evaluate whether minors use your platform and what kind of harm could possibly occur.
  • You implemented proportionate safeguards
    After risk assessment, you should implement adequate safety measures. Implement basic gating for low-risk content and stronger age assurance for higher-risk content.
  • You set mechanisms to minimize harmful exposure
    Another step is to minimize harmful exposure. Limit certain features for minors, adjust algorithms to disable addictive features, and add safety controls for minors.
  • You avoid dark patterns and manipulative design
    Don’t use prohibited techniques such as dark patterns, manipulative UX targeting kids, and other techniques that force engagement without regard for safety.


Note: Make sure you can explain your decisions and the proportionality of safeguards. Document your risk assessment activities and other decisions about the prevention of harmful content. You must be able to demonstrate proportional actions and outcomes.

Use a Consent Management Platform (CMP) to provide a Cookie Banner, collect user consent, and comply with KOSA.

CookieScript CMP offers third-party script management, consent recording, monthly website scans, automatic cookie categorization, automatic cookie declaration updates, translations into 34 languages, and more.

It also offers a 14-day free trial.

Frequently Asked Questions

What is the Kids Online Safety Act (KOSA)?

The Kids Online Safety Act (KOSA) is proposed U.S. legislation aimed at protecting minors online by requiring social media platforms to prevent minors from accessing content. It introduces the concept of “Duty of Care” and parental control, and requires platforms to mitigate risks like addiction, exploitation, and harmful content. Use a CMP like CookieScript to implement a Cookie Banner, store user consent, and comply with KOSA.

What counts as “reasonable measures” under KOSA?

KOSA counts your effort to evaluate user age as reasonable measures when you assessed your risk, you implemented proportionate safeguards, you minimized harmful exposure, and you avoided dark patterns. Make sure you can explain your decisions and the proportionality of safeguards. Use a CMP like CookieScript to implement a Cookie Banner and store user consent.

What is the main goal of the Kids Online Safety Act (KOSA)?

The main goal of KOSA is to protect minors from harmful online exposure by placing a duty of care on digital platforms. This means companies must actively reduce risks like exposure to harmful content, addictive features, and online exploitation, rather than simply reacting after harm occurs.

Who needs to comply with KOSA?

KOSA applies to all platforms and services accessible by minors under 18, including social media giants and other social media platforms, online communities or forums, gaming platforms, streaming services, and marketplaces or apps with user-generated content. Use a CMP like CookieScript to implement a Cookie Banner, store user consent, and comply with KOSA.

What’s the difference between age gating and age assurance?

Age gating relies on user self-declaration, like checking “I am 18+”. Age assurance requires providing evidence of a user’s age, including ID verification, face scans, third-party verification services, and behavioral or device-based signals. In 2026, age assurance is required for social media, gambling, and adult sites, since it’s harder to bypass and is more reliable.

Does KOSA require websites to verify users’ ages?

KOSA does not explicitly mandate strict age verification, but it strongly encourages platforms to take “reasonable measures” to identify and protect minors. In practice, this means that higher-risk platforms and apps must implement reliable age assurance methods, such as ID verification, face scans, and third-party verification services to comply with KOSA. Simply checking “I am 18+” is no longer enough.

What happens if a company fails to comply with KOSA?

If a company fails to meet KOSA requirements, it could face regulatory enforcement, fines, and legal action from authorities like the Federal Trade Commission (FTC). It could also lead to reputational damage and loss of user trust. Use a CMP like CookieScript to implement a cookie banner, store user consent, and comply with KOSA.

Is age gating enough for KOSA compliance?

No, age gating alone is usually not enough for KOSA compliance. Basic age gating relies entirely on self-declaration and could be easily bypassed. KOSA requires age assurance methods, such as ID verification, face scans, and third-party verification services to prove users’ ages. Platforms and apps must take “reasonable measures” to prevent harm to minors.

New to CookieScript?

CookieScript helps make your website ePrivacy and GDPR compliant.

We have all the necessary tools to comply with the latest privacy policy regulations: third-party script management, consent recording, monthly website scans, automatic cookie categorization, cookie declaration automatic update, translations to 34 languages, and much more.