Regulatory investigations into privacy practices follow patterns. The FTC's enforcement actions, state attorney general investigations, and GDPR enforcement decisions share common methodologies: they start with observable behavior, compare it against stated policies and legal requirements, and build enforcement cases around the gaps they find. Understanding what regulators look for — the specific signals that trigger investigations, the methodologies they use to assess compliance, and the evidence they rely on — is essential for any organization that wants to find and address its exposure before an investigator does.
Privacy investigations rarely begin with a company's self-disclosure of a problem. They typically originate from one of four sources: consumer complaints, technical tip-offs from privacy researchers or journalists, coordinated enforcement initiatives targeting specific industries or practices, or referrals from other regulators. Understanding these origins shapes how organizations should think about their exposure.
Consumer complaints are the most common trigger for FTC investigations. The FTC receives hundreds of thousands of consumer complaints annually through its Consumer Sentinel Network database. Complaints about deceptive privacy practices, unexpected data sharing, and failure to honor opt-out requests are systematically reviewed and can trigger investigations even when individual complaints don't allege significant harm. A pattern of similar complaints across a company's customer base is an especially strong investigation trigger.
Technical research is an increasingly significant investigation source. Academic privacy researchers, investigative journalists, and consumer advocacy organizations routinely conduct technical studies of company data practices. When these studies identify discrepancies between stated policies and observable practices — and publish the findings — regulators take notice. The Cambridge Analytica investigation, the FTC's Google actions, and numerous state attorney general investigations were preceded by public technical research that documented specific practices.
Coordinated enforcement initiatives target specific technologies or industries. The FTC's health app enforcement sweep, the state attorney general coalition's investigation into mobile location data, and GDPR enforcement initiatives targeting cookie consent banners and behavioral advertising have all demonstrated the pattern: regulators identify a practice they believe is widespread and legally problematic, then conduct coordinated investigations across multiple companies simultaneously.
The FTC's privacy enforcement authority flows from Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices. In the privacy context, this translates to two primary theories: deception (a company's privacy practices don't match its stated policies) and unfairness (the practices cause substantial harm to consumers that isn't outweighed by countervailing benefits).
For deception-based cases, the FTC's investigative methodology focuses on comparing actual data practices to stated policies. Investigators use technical tools to analyze what data flows exist from a company's digital properties, what identifiers are transmitted to third parties, and what those third parties do with the data. This analysis is compared against the company's privacy policy, terms of service, and any specific representations made in marketing materials or product interfaces. Where the actual behavior is materially inconsistent with the stated policies, the company has potential Section 5 deception exposure.
For unfairness-based cases, the FTC looks for practices that consumers couldn't reasonably anticipate and that create meaningful risks of harm. Collection of sensitive data without clear disclosure, use of dark patterns to obtain nominal consent for practices consumers wouldn't knowingly agree to, and sharing of sensitive personal information in ways that create risks of discrimination, financial harm, or physical danger have all supported unfairness theories in recent enforcement actions.
"The question we ask is: did the company do what it said it would do? And if it didn't, did it have a good reason? Most of the time, the answer to both questions is no." — Former FTC staff attorney
Enforcement by state attorneys general varies significantly by state, but several consistent patterns emerge from the investigations that have been made public.
California's CPPA (the California Privacy Protection Agency) has been the most explicit about its methodology. The agency has issued guidance describing its enforcement priorities: companies that collect sensitive personal information without adequate disclosure, companies that fail to honor consumer rights requests (particularly opt-out requests and data deletion), companies that use dark patterns to undermine consent, and companies whose privacy policies don't accurately describe their actual practices. The CPPA has also initiated audits of specific industries — connected vehicles, healthcare, and children's applications — using technical analysis to assess compliance before reaching out to companies.
Colorado's attorney general has been active in data protection assessment enforcement. Colorado's comprehensive privacy law requires data protection assessments for high-risk processing activities. The AG's office has made clear that it will request these assessments and that companies that haven't conducted them — or whose assessments don't adequately address the required factors — face enforcement risk.
Texas and Oregon have both signaled aggressive enforcement postures. Texas has focused particularly on health data, children's privacy, and targeted advertising practices. Oregon's broad law, which covers nonprofits and includes strong sensitive data protections, has been accompanied by enforcement guidance indicating the AG will prioritize companies with clear observable compliance gaps.
Understanding the specific technical analysis regulators employ helps organizations identify and prioritize their own exposure assessment.
Network traffic analysis is fundamental. Regulators — and the privacy researchers whose work often triggers investigations — use tools that capture and analyze the HTTP/HTTPS requests a website or application makes when a user interacts with it. This analysis reveals what data is transmitted to third parties: which identifiers, which behavioral signals, which user-entered information. Comparing this traffic to the privacy policy's disclosure of third-party sharing is a straightforward way to identify gaps.
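As a concrete illustration, the sketch below parses a HAR file (the traffic-capture format that browser developer tools export) and groups outbound query-parameter names by third-party host. It is a minimal example: the first-party domain and file name are placeholders, and a real review would also inspect POST bodies, cookies, and request headers.

```python
import json
from collections import defaultdict
from urllib.parse import urlparse, parse_qsl

FIRST_PARTY = "example.com"  # placeholder: the property under review

def third_party_flows(har_path: str) -> dict:
    """Group outbound query-parameter names by third-party host."""
    with open(har_path, encoding="utf-8") as f:
        har = json.load(f)
    flows = defaultdict(set)
    for entry in har["log"]["entries"]:
        url = urlparse(entry["request"]["url"])
        host = url.hostname or ""
        if host == FIRST_PARTY or host.endswith("." + FIRST_PARTY):
            continue  # first-party traffic is not the concern here
        # Identifiers (user IDs, emails, click IDs) often travel as
        # query parameters; record which names go to which host.
        for name, _ in parse_qsl(url.query, keep_blank_values=True):
            flows[host].add(name)
    return flows

if __name__ == "__main__":
    for host, params in sorted(third_party_flows("session.har").items()):
        print(f"{host}: {sorted(params)}")
```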
JavaScript analysis identifies what third-party code is loaded on web properties. The presence of specific tracking technologies — Meta Pixel, Google Analytics, DoubleClick, session replay tools, ad exchange integrations — is directly observable. Each of these tools has documented data collection behavior that can be mapped to applicable legal frameworks. A regulator who knows what JavaScript is running on a company's site knows a great deal about its data practices before any investigation formally begins.
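A minimal version of this check can be run against any page. The sketch below fetches a page's HTML and matches statically declared script sources against a shortlist of well-known tracker hosts. The list here is illustrative, not exhaustive, and dynamically injected tags require a headless browser to observe.

```python
import re
import urllib.request
from urllib.parse import urlparse

# Illustrative shortlist; real reviews use fuller lists such as those
# maintained by tracker-blocking projects.
KNOWN_TRACKERS = {
    "connect.facebook.net": "Meta Pixel",
    "www.googletagmanager.com": "Google Tag Manager",
    "www.google-analytics.com": "Google Analytics",
    "stats.g.doubleclick.net": "DoubleClick",
}

SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']([^"\']+)["\']', re.IGNORECASE)

def detect_trackers(page_url: str) -> list:
    """Return (host, tracker) pairs for recognized third-party scripts
    declared statically in the page's HTML."""
    html = urllib.request.urlopen(page_url).read().decode("utf-8", errors="replace")
    hits = []
    for src in SCRIPT_SRC.findall(html):
        host = urlparse(src).hostname or ""
        if host in KNOWN_TRACKERS:
            hits.append((host, KNOWN_TRACKERS[host]))
    return hits
```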
Privacy policy analysis involves natural language processing and manual review to identify specific claims and compare them against observed behavior. Regulators look for categorical statements that are contradicted by technical observations: "We do not sell your personal information" combined with programmatic advertising integrations, "We do not share your data with third parties" combined with observed pixel transmissions, "We collect only the data necessary for our services" combined with extensive behavioral tracking on pages unrelated to the company's core service.
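In toy form, the comparison looks like this: scan the policy text for categorical claims and flag any that conflict with the hosts surfaced by the traffic and script analyses above. The claim strings and observation keys are hypothetical, and production analyses extract claims with NLP rather than literal string matching, but the logic is the same.

```python
# Hypothetical categorical claims paired with the observation keys
# that would contradict them.
CLAIM_CHECKS = [
    ("we do not sell your personal information", "ad_exchange_hosts"),
    ("we do not share your data with third parties", "pixel_hosts"),
]

def flag_contradictions(policy_text: str, observations: dict) -> list:
    """Flag categorical policy claims contradicted by observed traffic."""
    text = policy_text.lower()
    findings = []
    for claim, evidence_key in CLAIM_CHECKS:
        hosts = observations.get(evidence_key, set())
        if claim in text and hosts:
            findings.append(f'"{claim}" contradicted by: {sorted(hosts)}')
    return findings

# Wire in outputs from the traffic and script analyses above:
observations = {
    "ad_exchange_hosts": {"stats.g.doubleclick.net"},
    "pixel_hosts": {"connect.facebook.net"},
}
policy = "Your privacy matters. We do not sell your personal information."
print(flag_contradictions(policy, observations))
```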
The practical implication is clear: organizations can conduct the same analysis regulators use, against themselves, before an investigation begins. Outside-in technical analysis of actual data practices, compared against stated policies and applicable legal requirements, produces a prioritized view of enforcement exposure that allows organizations to address the highest-risk gaps before they become investigation triggers.
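One illustrative way to turn those findings into a prioritized view is a simple severity score. The weights below are assumptions for the sketch, not a regulatory formula: explicit policy contradictions rank highest because they map directly to deception theories, sensitive-data flows next, with statutory breadth as a tiebreaker.

```python
from dataclasses import dataclass

@dataclass
class Gap:
    description: str
    contradicts_policy: bool  # direct Section 5 deception exposure
    sensitive_data: bool      # heightened unfairness / state-law risk
    statutes_implicated: int  # breadth across applicable frameworks

def priority(gap: Gap) -> int:
    # Assumed weights for the sketch, not a regulatory formula.
    return (100 * gap.contradicts_policy
            + 50 * gap.sensitive_data
            + gap.statutes_implicated)

gaps = [
    Gap("Pixel on checkout page despite 'no sharing' claim", True, False, 3),
    Gap("Health-intake form posts answers to analytics host", False, True, 2),
]
for gap in sorted(gaps, key=priority, reverse=True):
    print(priority(gap), gap.description)
```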
This approach requires tools and methodology that most internal privacy programs haven't traditionally needed. The relevant skills are technical — network traffic analysis, JavaScript reverse engineering, privacy policy parsing — combined with legal analysis that maps observed practices to specific statutory frameworks. Organizations that build this capability internally or access it through specialized platforms are operating with the same information set that regulators use, which is the foundation for meaningful risk management rather than reactive compliance.
The regulators are going to look. The question is whether you've already looked first.