AI Governance Is the Next Enforcement Frontier

Full name
11 Jan 2022
5 min read

Two years ago, AI governance was a policy discussion happening mostly in academic conferences and Brussels regulatory working groups. Today it is an active enforcement priority for multiple jurisdictions, with real penalties, real investigations, and a growing body of guidance that creates clear expectations for companies deploying AI systems that touch personal data.

The shift happened faster than most privacy professionals expected. The combination of the EU AI Act entering its implementation phases, FTC guidance on AI and unfair or deceptive practices, and state-level legislative action on automated decision-making has created an enforcement environment that requires immediate attention from privacy programs that have focused primarily on data collection and consent compliance.

The EU AI Act: From Compliance Framework to Enforcement Reality

The EU AI Act, which entered into force in August 2024, is the first comprehensive legal framework specifically regulating artificial intelligence. Its risk-based approach categorizes AI systems into prohibited, high-risk, limited-risk, and minimal-risk tiers, with the most stringent requirements applying to high-risk applications in areas including employment, credit assessment, healthcare, education, law enforcement, and critical infrastructure.

The Act's prohibited practices provisions became enforceable first, in February 2025. These include real-time biometric identification in public spaces (with limited exceptions), AI systems that exploit vulnerabilities to manipulate behavior, and social scoring systems. For companies with EU operations or EU-facing AI deployments, this is not an abstract compliance exercise — it is an active enforcement reality with penalties up to 35 million euros or 7% of global annual turnover, whichever is higher, for prohibited practice violations.
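To make the penalty exposure concrete, here is a minimal sketch of how that ceiling works: the fine cap is the higher of the fixed amount or the turnover percentage. The function name and the example turnover figure are illustrative, not drawn from the Act's text.

```python
def prohibited_practice_fine_ceiling(global_annual_turnover_eur: float) -> float:
    """Illustrative ceiling for prohibited-practice fines under the EU AI Act:
    the higher of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# For a hypothetical company with EUR 2 billion in turnover, the 7% figure
# (EUR 140 million) exceeds the fixed amount, so it sets the ceiling.
print(prohibited_practice_fine_ceiling(2_000_000_000))  # 140000000.0
```

For smaller companies the EUR 35 million floor dominates, which is why the exposure is material even well below enterprise scale.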

High-risk AI system requirements become fully enforceable beginning August 2026. Companies have a narrowing window to complete conformity assessments, establish technical documentation, implement human oversight mechanisms, and register applicable systems in the EU AI systems database. The compliance workload is substantial, and the organizations that began preparation in 2024 are meaningfully ahead of those that deferred.

"The AI Act is not primarily an AI regulation. It is a product safety regulation applied to AI systems. That reframing changes how compliance teams need to think about their obligations." — EU policy counsel

The FTC and AI: Unfair and Deceptive Practices

The Federal Trade Commission has been the most active US federal regulator in the AI space, operating through its existing authority over unfair and deceptive acts and practices (UDAP) rather than waiting for Congress to pass AI-specific legislation. The FTC's approach has several practical implications for companies deploying AI in consumer-facing contexts.

The FTC has signaled specific concerns about AI-generated content presented as human-created, AI systems making decisions that are unexplainable or that contradict stated policies, AI tools that claim capabilities they don't have, and the use of AI to automate unfair practices at scale. The agency's enforcement actions in adjacent areas — against companies using algorithms in credit, employment, and housing contexts — provide a roadmap for where AI-specific enforcement is headed.

Of particular relevance to companies with privacy programs: the FTC has explicitly linked AI governance to data governance. If a company's AI system was trained on data collected without adequate consent, or if it makes predictions about sensitive personal characteristics (health status, financial situation, family status) without adequate disclosure, those practices are potential UDAP violations under existing authority. AI governance is not separate from privacy compliance — it is downstream of it.

State-Level Automated Decision-Making Laws

Several states have enacted or are advancing laws specifically addressing automated decision-making and algorithmic systems. The comprehensive privacy laws of Colorado, Connecticut, and Virginia include provisions requiring companies to conduct data protection assessments before deploying AI systems that profile individuals for consequential decisions. Others have enacted separate laws addressing AI in employment, housing, or credit contexts.

Colorado's SB 205, the Colorado Artificial Intelligence Act effective February 2026, is the most comprehensive state AI law to date. It applies to developers and deployers of high-risk AI systems — defined as AI systems making or materially influencing consequential decisions affecting Colorado residents. Requirements include risk management policies, impact assessments, annual reports to the attorney general, and consumer rights to appeal decisions made by covered AI systems.

Illinois has amended its biometric privacy law and employment discrimination statutes to address AI-specific scenarios, including video interview analysis tools that use facial expression or voice analysis to assess candidates. Several other states are advancing similar legislation through their 2025-2026 legislative sessions.

What This Requires From Privacy Programs

The convergence of AI governance and privacy compliance creates specific operational requirements that most privacy programs were not designed to address.

AI System Inventory

The foundation of AI governance compliance is knowing what AI systems the organization operates, what data they consume, what decisions or recommendations they generate, and which regulatory frameworks apply to each system. Many organizations lack this inventory. AI systems have been deployed across business units without centralized tracking, often under vendor contracts that don't clearly describe the AI components involved.

High-Risk Classification

Both the EU AI Act and state laws use risk-based frameworks that require classifying AI systems by their potential impact. High-risk classification triggers substantial compliance obligations. Classification requires understanding not just what a system does technically, but what decisions it influences and what populations it affects — an assessment that requires collaboration between technical, legal, and business teams.

Impact Assessments and Documentation

AI impact assessments — required by the EU AI Act, Colorado's AI Act, and several other frameworks — are more demanding than traditional privacy impact assessments. They require technical documentation of model architecture and training data, analysis of potential discriminatory outputs, human oversight mechanisms, and ongoing monitoring plans. Companies that have done robust privacy impact assessments are better positioned to extend that practice to AI, but the AI-specific requirements are distinct.
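The documentation elements named above can be tracked as a simple checklist. The section names and items below are illustrative groupings of the requirements described in the text, not the actual headings of any framework's assessment template.

```python
# Hypothetical AI impact assessment checklist, loosely tracking the
# documentation elements described in the text.
AI_IMPACT_ASSESSMENT_SECTIONS = {
    "technical_documentation": ["model architecture", "training data provenance"],
    "bias_analysis": ["potential discriminatory outputs", "affected populations"],
    "human_oversight": ["override mechanisms", "escalation paths"],
    "monitoring": ["performance drift checks", "incident response plan"],
}

def missing_sections(completed: set[str]) -> set[str]:
    """Return the assessment sections not yet completed."""
    return set(AI_IMPACT_ASSESSMENT_SECTIONS) - completed

print(missing_sections({"bias_analysis"}))
```

Note how far this goes beyond a traditional privacy impact assessment: three of the four sections concern model behavior and governance over time, not data collection.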

Transparency and Explainability

Multiple frameworks require that individuals be informed when AI systems are making or influencing decisions that affect them. Where consequential decisions are involved, rights to explanation and appeal are increasingly mandated. This creates product and operational requirements — systems must be built to provide explanations, and processes must exist to handle appeals — that go well beyond legal compliance documentation.
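Supporting explanation and appeal rights means the system must record, at decision time, enough to explain the outcome later. A minimal sketch, with hypothetical field and function names not drawn from any statute:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """Illustrative record supporting notice, explanation, and appeal rights."""
    subject_id: str
    system_name: str
    outcome: str
    key_factors: list[str]      # human-readable inputs behind the outcome
    decided_at: datetime
    appeal_open: bool = True

def file_appeal(record: AIDecisionRecord) -> str:
    """Route an appeal to human review while the appeal window is open."""
    if not record.appeal_open:
        return "appeal window closed; escalate to manual review"
    return f"appeal queued for human review of {record.system_name} decision"

record = AIDecisionRecord(
    subject_id="applicant-001",
    system_name="credit-model",
    outcome="declined",
    key_factors=["debt-to-income ratio"],
    decided_at=datetime.now(timezone.utc),
)
print(file_appeal(record))
```

The operational point is that `key_factors` must be populated when the decision is made; explanations reconstructed after the fact, once an appeal arrives, are both harder to produce and less defensible.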

The Intersection With Privacy Risk Intelligence

For companies deploying AI systems that process personal data, privacy risk intelligence takes on additional dimensions. An AI system trained on data collected through practices that violate VPPA, BIPA, or state tracking laws inherits the privacy risk embedded in its training data. A recommendation system that uses behavioral data collected without consent may generate outputs that are both technically accurate and legally problematic.

The companies most exposed to AI governance enforcement are not necessarily those with the most sophisticated AI systems. They are often companies whose AI deployments were built on data practices that didn't anticipate the legal landscape that now surrounds them — and that haven't conducted the retrospective review needed to assess whether their current AI operations rest on a compliant data foundation.