
15 Things the EU AI Act Bans in 2026 (That You Might Not Know About)

A comprehensive guide to prohibited AI practices under Europe's groundbreaking AI regulation

What is the EU AI Act?

The European Union's Artificial Intelligence Act, which entered into force in August 2024 and whose prohibitions have applied since February 2025, represents the world's first comprehensive legal framework for artificial intelligence. This landmark regulation categorizes AI systems by risk level and establishes strict prohibitions on applications deemed unacceptable threats to fundamental rights and human dignity.

While many know about high-profile bans like social scoring, the EU AI Act contains numerous lesser-known prohibitions that catch many developers and organizations off guard. Understanding these restrictions is crucial for anyone operating in the European market—violations can result in fines up to €35 million or 7% of global annual turnover, whichever is higher.

"The AI Act is not just about banning dangerous systems—it's about establishing a human-centric approach to AI that protects fundamental rights while fostering innovation. Many of the prohibitions address subtle but significant threats that most people haven't considered."

Dr. Sarah Chander, Senior Policy Advisor, European Digital Rights (EDRi)

According to the European Commission, these bans apply to all AI systems placed on the EU market or whose output is used in the EU, regardless of where the provider is located.

Understanding the Ban Categories

Before diving into specific prohibitions, it's important to understand that the EU AI Act categorizes banned practices into several groups based on the fundamental rights they threaten. The regulation takes a risk-based approach, with prohibited practices representing "unacceptable risk" that cannot be mitigated through compliance measures.

These bans are absolute: apart from the narrow exceptions written into the Act itself (such as the law-enforcement carve-outs discussed below), there is no "responsible use" or "proper safeguards" defense. If your AI system falls under a prohibited category, it cannot be deployed in the EU, period.

1. Subliminal Manipulation Techniques

The EU AI Act prohibits AI systems that deploy subliminal techniques beyond a person's consciousness to materially distort their behavior in a manner that causes or is likely to cause significant harm. This goes far beyond traditional advertising regulations.

Examples include:

  • AI-powered audio or visual stimuli embedded in content that users cannot consciously perceive
  • Algorithms that exploit neurological vulnerabilities to trigger compulsive behaviors
  • Systems that use brain-computer interfaces to influence decisions without explicit awareness
  • Recommendation engines designed to create addiction through subconscious pattern exploitation

This prohibition extends to marketing, gaming, social media platforms, and any other application where AI might manipulate user behavior below the threshold of conscious awareness. According to the official AI Act text, the key criterion is whether the manipulation occurs outside conscious perception and causes harm.

2. Exploiting Vulnerabilities of Specific Groups

AI systems that exploit vulnerabilities related to age, disability, or socioeconomic situation are strictly prohibited when this exploitation materially distorts behavior and causes harm. This ban is broader than many realize.

Prohibited applications include:

  • AI-targeted advertising that exploits children's developmental limitations to promote harmful products
  • Systems that identify and target elderly users with cognitive decline for financial services
  • Algorithms that detect users with gambling addictions to increase engagement
  • AI that identifies economically vulnerable individuals to promote predatory loans

"What makes this prohibition powerful is its focus on exploitation of vulnerability, not just the vulnerability itself. An AI system serving elderly users isn't banned—but one that identifies and exploits their cognitive limitations to sell unnecessary products absolutely is."

Prof. Michael Veale, Associate Professor of Digital Rights, University College London

3. Social Scoring by Public Authorities

While widely publicized, the social scoring ban has nuances many miss. Early drafts limited the prohibition to public authorities, but the final text of the EU AI Act bans general-purpose social scoring by public and private actors alike: AI that evaluates or classifies people based on their social behavior or personal characteristics, leading to detrimental or unfavourable treatment.

Lesser-known aspects of this ban:

  • It applies to any public authority, including municipal governments, schools, and public healthcare providers
  • Scoring doesn't need to be centralized—distributed systems that aggregate to create social scores are also banned
  • The prohibition covers both direct consequences (denied services) and indirect ones (increased scrutiny)
  • Private companies cannot deploy such systems on behalf of public authorities

According to the European Parliament, this extends to seemingly benign applications like "good citizen" scoring systems that some municipalities had considered for resource allocation.

4. Real-Time Remote Biometric Identification in Public Spaces

The ban on real-time remote biometric identification (RBI) in publicly accessible spaces by law enforcement is well-known, but the exceptions and scope are frequently misunderstood.

Key points most people miss:

  • "Publicly accessible spaces" includes privately-owned areas open to the public (malls, stadiums, train stations)
  • The ban covers facial recognition, gait analysis, voice recognition, and any biometric identification
  • Limited exceptions exist only for serious crimes (terrorism, trafficking, missing children) with judicial authorization
  • "Real-time" means immediate or near-immediate identification—delayed analysis may still be restricted under other provisions

Even the exceptions are tightly controlled. As noted by Access Now, law enforcement must obtain prior judicial authorization except in duly justified urgent situations, and use must be proportionate and time-limited.

5. Biometric Categorization Based on Sensitive Characteristics

This lesser-known prohibition bans AI systems that categorize people based on biometric data to infer or deduce their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.

Critical applications affected:

  • Facial analysis tools claiming to detect sexual orientation from photographs
  • Systems that infer religious beliefs from appearance or dress
  • AI that categorizes individuals by race or ethnicity for any purpose
  • Technology claiming to determine political affiliation from biometric features

There's a narrow exception for law enforcement categorization (like searching for suspects by physical description), but according to European Parliament research, this exception is strictly limited and doesn't apply to inferring sensitive characteristics from biometrics.

6. Emotion Recognition in Workplace and Education

One of the most surprising bans prohibits emotion recognition systems in workplace and educational settings, with very limited exceptions. This catches many HR tech and edtech companies off guard.

Banned applications include:

  • AI systems that analyze employee facial expressions during meetings or work
  • Classroom monitoring tools that assess student emotional engagement
  • Interview software that evaluates candidate emotions as part of hiring decisions
  • Performance management systems incorporating emotional state analysis

The only exceptions are for medical or safety reasons (like detecting driver fatigue). According to research highlighted by WIRED, emotion recognition technology lacks scientific validity and poses significant risks to privacy and dignity—key reasons for the ban.

"The emotion recognition ban acknowledges what researchers have known for years: these systems don't actually work reliably and create serious risks of discrimination. Facial expressions don't universally map to internal emotional states, yet these tools were being used to make consequential decisions about people's careers and education."

Dr. Kate Crawford, Research Professor, USC Annenberg and Senior Principal Researcher, Microsoft Research

7. Indiscriminate Scraping of Biometric Data

The EU AI Act bans the untargeted scraping of facial images or biometric data from the internet or CCTV footage to create or expand facial recognition databases. This prohibition has major implications for AI training practices.

What this means in practice:

  • Companies cannot scrape social media, websites, or public cameras for faces to train recognition systems
  • Existing databases built through indiscriminate scraping cannot be used or expanded
  • "Publicly available" data doesn't equal "legally usable" under this ban
  • The prohibition applies regardless of whether data is later anonymized

This directly impacts companies like Clearview AI, whose business model relies on scraping billions of images from the internet. The French, Italian, and Greek data protection authorities, among other EU regulators, have already fined Clearview AI over such practices.

8. Risk Assessment for Criminal Offenses Based Solely on Profiling

AI systems that assess the risk of individuals committing criminal offenses based solely on profiling or personality traits are prohibited. The "solely" qualifier is crucial but often misunderstood.

This ban prohibits:

  • Predictive policing systems that flag individuals as high-risk based only on demographic or behavioral profiles
  • AI that predicts recidivism risk using only personal characteristics without considering specific evidence
  • Systems that create "potential offender" lists based on pattern matching to criminal profiles
  • Algorithms that assess dangerousness based purely on statistical correlations with offender characteristics

However, risk assessment tools that consider specific evidence and actions alongside profile data may be permissible as high-risk systems with strict requirements. According to Fair Trials, the key distinction is whether the system relies exclusively on profiling or incorporates individual evidence and circumstances.

9. Lie Detection Systems (With Limited Exceptions)

Contrary to popular belief, the EU AI Act doesn't completely ban lie detection AI, but it comes close. AI "polygraphs and similar tools" are classified as high-risk when used by law enforcement or by migration and border authorities, and because many lie detectors rely on emotion inference, they fall under the outright ban on emotion recognition in workplace and educational settings.

Banned or tightly restricted applications include:

  • AI lie detectors used in employment screening or interviews
  • Border control systems that assess truthfulness of travelers' statements
  • Insurance claim processing that uses AI to detect deception
  • Educational settings using AI to determine if students are cheating based on behavioral cues

Even the narrow law-enforcement uses remain controversial. As documented by the American Psychological Association, the scientific validity of AI lie detection is highly questionable, making any use ethically fraught.

10. Deep Fakes Without Disclosure

While not an absolute ban, the AI Act requires disclosure when AI generates or manipulates image, audio, or video content that resembles existing persons, objects, places, or events (deep fakes). Failure to disclose is effectively prohibited.

Mandatory disclosure requirements apply to:

  • AI-generated synthetic media depicting real people
  • Manipulated videos that alter what someone said or did
  • Audio deep fakes of real individuals' voices
  • AI-generated images of real places or events that didn't occur

Exceptions exist for artistic, creative, or satirical content, but according to the Council of Europe, these exceptions are narrow and don't apply to political advertising or contexts where deception could cause harm. The disclosure must be clear, prominent, and machine-readable.
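The Act requires the marking to be machine-readable but does not prescribe one technical format (real deployments would more likely adopt an emerging standard such as C2PA Content Credentials). A minimal sketch, assuming a hypothetical JSON sidecar record whose field names are illustrative rather than mandated:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_disclosure_record(media_bytes: bytes, generator: str) -> str:
    """Return an illustrative machine-readable AI-content disclosure.

    The field names are hypothetical; they are not mandated by the Act.
    The content hash binds the record to one specific piece of media.
    """
    record = {
        "ai_generated": True,
        "generator": generator,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, sort_keys=True)

# Example: tag the raw bytes of a synthetic image.
record = build_disclosure_record(b"<image bytes>", "example-image-model")
print(json.loads(record)["ai_generated"])  # True
```

Hashing the media into the record means the disclosure cannot simply be copied onto unrelated content, which is one way to make a marking verifiable as well as machine-readable.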

11. AI Systems That Manipulate Human Behavior to Circumvent Free Will

Beyond subliminal manipulation, the AI Act bans systems that materially distort human behavior through psychological manipulation, causing significant harm. This prohibition is broader and more subjective than many realize.

Examples include:

  • AI-powered "dark patterns" that manipulate users into unwanted actions
  • Systems that exploit psychological vulnerabilities to override rational decision-making
  • Algorithms designed to create fear, anxiety, or urgency to drive specific behaviors
  • AI that uses personalized psychological profiles to manipulate political opinions

The challenge with this prohibition is determining when persuasion becomes manipulation. As noted by legal scholars in Computer Law & Security Review, the line between legitimate persuasion and prohibited manipulation will likely be defined through case law and regulatory guidance.

12. Predictive Policing Based on Profiling Locations

A subtle but important ban prohibits AI systems that assess or predict the risk of individuals committing criminal offenses based solely on profiling of individuals or groups. This extends to location-based predictive policing.

Prohibited systems include:

  • AI that identifies "high-crime" neighborhoods and flags residents as higher risk
  • Systems that predict where crimes will occur based on demographic profiling of areas
  • Algorithms that allocate police resources based on racial or socioeconomic profiles of locations
  • Tools that create risk scores for individuals based on where they live or frequent

This doesn't ban all predictive policing—systems based on crime patterns, environmental factors, or specific intelligence remain permissible as high-risk applications. According to Amnesty International, the ban targets systems that perpetuate discrimination through profiling rather than evidence-based prediction.

13. AI-Powered Toy Manipulation of Children

A lesser-known consequence of the Act's vulnerability-exploitation ban is that AI systems in toys or child-oriented services may not manipulate children's behavior in harmful ways. The resulting protections go beyond the general manipulation bans, with child-specific safeguards.

Banned applications include:

  • AI toys that encourage excessive screen time or addictive behaviors
  • Voice assistants in children's products that manipulate purchasing requests
  • Gaming AI designed to exploit children's developmental vulnerabilities for monetization
  • Educational AI that uses manipulative techniques to extend engagement beyond healthy limits

The standard for "harmful manipulation" is lower for children than adults, recognizing their developmental vulnerability. As documented by UNICEF, AI systems targeting children must prioritize their best interests and development over commercial goals.

14. Certain Immigration and Asylum Decision-Making

While not a complete ban, the AI Act severely restricts AI systems that assist in immigration, asylum, and border control management decisions. Systems that make final decisions without meaningful human oversight are effectively prohibited.

Restricted applications include:

  • AI that automatically rejects visa or asylum applications without human review
  • Systems that assess credibility of asylum claims as sole decision-maker
  • Automated border control decisions based on risk profiling
  • AI that determines immigration status or deportation without human oversight

According to the UN Refugee Agency, these restrictions recognize that immigration and asylum decisions involve fundamental rights that require human judgment, cultural understanding, and individual assessment that AI cannot provide.

15. Biometric Databases Without Proper Legal Basis

The final lesser-known prohibition bans the creation or expansion of biometric databases through untargeted collection without proper legal basis and safeguards. This complements the scraping ban but extends further.

Prohibited practices include:

  • Building facial recognition databases without explicit consent or legal authorization
  • Expanding existing databases through mass collection at events or public spaces
  • Creating biometric databases from CCTV footage without specific legal grounds
  • Sharing or combining biometric databases without proper legal framework

This provision works in conjunction with GDPR requirements but adds AI-specific restrictions. As explained by the European Data Protection Board, biometric data is particularly sensitive, and mass collection for AI training or deployment requires exceptional justification and safeguards.

Implementation Timeline and Enforcement

Understanding when these bans took effect is crucial for compliance:

  1. August 1, 2024: The AI Act entered into force
  2. February 2, 2025: The bans on prohibited AI practices became applicable (six months after entry into force)
  3. August 2, 2026: Most remaining provisions become applicable, with some obligations for AI embedded in regulated products extending to August 2, 2027

As of March 2026, all prohibitions are fully enforceable. Companies found deploying banned systems face:

  • Fines up to €35 million or 7% of global annual turnover (whichever is higher)
  • Mandatory system shutdown and remediation
  • Potential criminal liability for individuals in serious cases
  • Reputational damage and loss of market access

According to the European Commission, enforcement is coordinated between national authorities, with the AI Office providing oversight and guidance.

Compliance Best Practices

To ensure your AI systems don't violate these prohibitions:

1. Conduct Prohibited Use Screening

Before developing or deploying any AI system in the EU market:

  • Review all 15 prohibited categories against your use case
  • Document why your system doesn't fall under prohibited practices
  • Consider edge cases and indirect applications
  • Consult legal experts for borderline cases
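The screening steps above can be sketched as a simple checklist routine. The category keys below paraphrase the Article 5 prohibitions and are illustrative assumptions, not official legal identifiers; a flagged category routes the use case to counsel rather than proving a violation:

```python
# Illustrative, non-exhaustive screening checklist. The keys paraphrase the
# Article 5 prohibitions and are NOT official legal identifiers.
PROHIBITED_CATEGORIES = {
    "subliminal_manipulation": "techniques beyond conscious awareness that distort behavior",
    "vulnerability_exploitation": "exploiting age, disability, or socioeconomic situation",
    "social_scoring": "general-purpose scoring leading to detrimental treatment",
    "emotion_recognition_work_edu": "emotion inference in workplace or education",
    "untargeted_biometric_scraping": "scraping faces from the web or CCTV for databases",
}

def screen_use_case(answers: dict) -> list:
    """Return the categories flagged for legal review.

    `answers` maps every category key to True if the use case might touch it.
    A flagged category does not prove a violation; it routes the case to
    qualified counsel, per the screening practice described above.
    """
    missing = set(PROHIBITED_CATEGORIES) - set(answers)
    if missing:  # refuse to pass a use case that skipped any category
        raise ValueError(f"unscreened categories: {sorted(missing)}")
    return sorted(key for key, touched in answers.items() if touched)

# Example: a recommender that might exploit addiction-prone users.
answers = {key: False for key in PROHIBITED_CATEGORIES}
answers["vulnerability_exploitation"] = True
print(screen_use_case(answers))  # ['vulnerability_exploitation']
```

Forcing an explicit answer for every category, rather than defaulting unanswered ones to "not applicable," is what turns this from a formality into the documented assessment the best practice calls for.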

2. Implement Purpose Limitation

Design AI systems with specific, legitimate purposes that cannot be repurposed for prohibited uses:

  • Build technical constraints preventing prohibited applications
  • Include contractual restrictions in customer agreements
  • Monitor for misuse and terminate access when detected
  • Document intended use cases clearly
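One way to build such a technical constraint is a purpose-limitation gate at the service boundary. This is a minimal sketch under stated assumptions: the purpose labels are hypothetical, and a real deployment would pair the check with the contractual restrictions and misuse monitoring listed above:

```python
# Illustrative purpose-limitation gate for an AI service endpoint.
# Purpose labels are hypothetical examples, not an official taxonomy.
BLOCKED_PURPOSES = frozenset({
    "social_scoring",
    "emotion_recognition_workplace",
    "biometric_categorization_sensitive",
})

class ProhibitedUseError(PermissionError):
    """Declared purpose falls under an EU AI Act prohibition."""

def authorize_request(declared_purpose: str) -> str:
    """Reject requests whose declared purpose is blocked; pass the rest."""
    if declared_purpose in BLOCKED_PURPOSES:
        raise ProhibitedUseError(
            f"purpose '{declared_purpose}' is not served on the EU market"
        )
    return declared_purpose

print(authorize_request("document_summarization"))  # document_summarization
```

Requiring callers to declare a purpose also produces the audit trail needed to detect misuse and terminate access, which is the monitoring half of this best practice.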

3. Establish Ongoing Monitoring

The regulatory landscape continues evolving:

  • Subscribe to updates from the European AI Office
  • Monitor enforcement actions and regulatory guidance
  • Participate in industry working groups for interpretation clarity
  • Update compliance assessments quarterly

4. Train Your Teams

Ensure developers, product managers, and executives understand prohibitions:

  • Conduct regular AI Act training sessions
  • Create internal guidelines and decision trees
  • Establish compliance checkpoints in development processes
  • Empower teams to raise concerns without penalty

"Compliance isn't just about avoiding fines—it's about building AI systems that respect fundamental rights and earn public trust. Organizations that embrace these prohibitions as design principles rather than legal obstacles will be better positioned for long-term success in the AI economy."

Margrethe Vestager, former Executive Vice-President, European Commission

Common Misconceptions and Clarifications

"These bans only apply to EU companies"

False. The AI Act has extraterritorial reach similar to GDPR. If your AI system is placed on the EU market or its output is used in the EU, you're subject to these prohibitions regardless of where your company is located.

"Emotion recognition is completely banned"

Not quite. Emotion recognition is banned in workplace and educational settings, with exceptions for medical and safety purposes. Other applications face strict requirements as high-risk systems but aren't prohibited.

"All predictive policing is illegal"

Incorrect. Predictive policing based on profiling individuals or assessing individual risk solely through profiling is banned. Systems that predict crime patterns based on environmental factors, historical data, or specific intelligence remain permissible as high-risk applications.

"Open-source AI models are exempt"

No. The prohibitions apply to AI systems regardless of whether they're proprietary or open-source. However, general-purpose AI models have different requirements than specific applications.

Frequently Asked Questions

What happens if I unknowingly deployed a prohibited system before the ban took effect?

The prohibitions had no grace period for legacy systems: the Article 5 bans applied from February 2, 2025 to all prohibited systems, including those already deployed. Authorities generally show leniency for good-faith compliance efforts, but continued operation after that date risks full penalties.

Can I use prohibited AI systems for research purposes?

Limited exceptions exist for scientific research and development conducted in controlled environments that don't affect individuals' rights. However, deploying such systems outside research settings remains prohibited.

How do I know if my AI system "materially distorts behavior"?

This requires case-by-case assessment. Factors include: degree of influence on decision-making, whether users can resist or are aware of the influence, and whether harm results. When in doubt, consult legal experts and err on the side of caution.

Are there industry-specific exemptions?

Very few. National security and defense have some carve-outs, and medical/safety applications may justify otherwise prohibited uses (like emotion recognition for driver fatigue detection). Commercial interests alone don't justify exemptions.

The Broader Context: Why These Bans Matter

The EU AI Act's prohibitions represent a fundamental shift in how societies govern artificial intelligence. Rather than relying solely on post-harm litigation or voluntary industry standards, the Act establishes ex-ante prohibitions on AI applications deemed incompatible with fundamental rights.

This approach has global implications. Just as GDPR became a de facto global standard for data protection, the AI Act's prohibitions are influencing AI governance worldwide. Countries from Brazil to Canada to South Korea are considering similar restrictions in their AI regulations.

For AI developers and deployers, these bans create both challenges and opportunities. While some business models become unviable, the prohibitions also establish clear boundaries that can guide responsible innovation. Companies that embrace these restrictions as design principles—building AI that respects human autonomy, dignity, and rights—will be better positioned for sustainable growth.

Conclusion and Next Steps

The 15 prohibited AI practices under the EU AI Act represent more than legal compliance requirements—they embody fundamental values about the role of AI in society. From subliminal manipulation to emotion recognition in workplaces, these bans address real harms that AI systems can inflict when deployed without adequate safeguards.

As of March 2026, these prohibitions are fully enforceable, and ignorance is no defense. Whether you're developing AI systems, deploying them in your organization, or investing in AI companies, understanding these bans is essential.

Immediate action steps:

  1. Audit all AI systems you develop or use against the 15 prohibited categories
  2. Document your compliance assessment and retain for regulatory review
  3. Shut down any systems that fall under prohibitions, even if they generate revenue
  4. Establish ongoing monitoring processes for regulatory updates and guidance
  5. Train your teams on AI Act requirements and create compliance checkpoints
  6. Consider engaging external legal experts for complex or borderline cases

The EU AI Act's prohibitions mark the beginning of a new era in AI governance—one that prioritizes human rights and dignity over technological capability alone. Organizations that adapt quickly and thoughtfully will not only avoid penalties but also build more trustworthy, sustainable AI systems.

For ongoing updates and analysis of EU AI Act developments, subscribe to our newsletter and follow our AI regulation coverage.

References and Further Reading

  1. EU Artificial Intelligence Act - Official Text and Resources
  2. European Commission: Regulatory Framework on AI
  3. EUR-Lex: Proposal for AI Act (Original Text)
  4. European Parliament: AI Act Press Release
  5. European Parliament Research Service: AI Act Briefing
  6. Access Now: EU AI Act Explained
  7. European Data Protection Board
  8. WIRED: The Controversy Over Emotion AI
  9. Fair Trials: Criminal Justice and AI
  10. Amnesty International: Discriminatory Risk Assessment
  11. UNICEF: AI and Children
  12. UNHCR: Refugee Protection and Technology
  13. American Psychological Association: Lie Detection Technology
  14. Council of Europe: Freedom of Expression
  15. Computer Law & Security Review: AI Manipulation Analysis

Disclaimer: This article provides general information about the EU AI Act as of March 6, 2026. It does not constitute legal advice. Consult qualified legal professionals for guidance on specific compliance questions.


Cover image: AI generated image by Google Imagen

Intelligent Software for AI Corp., Juan A. Meza March 6, 2026