In a series of high-profile enforcement actions, Texas Attorney General Ken Paxton has taken center stage in policing tech giants and AI companies, securing a $1.4 billion settlement with Meta and launching investigations into controversial AI platforms such as Character.AI. These actions reflect a growing trend of state-level regulatory assertiveness that is reshaping how digital companies operate, especially around protecting minors and sensitive personal data.
In July 2024, the Texas Attorney General’s office secured what is considered the largest biometric privacy settlement in U.S. history: $1.4 billion from Meta Platforms, formerly Facebook. The settlement followed allegations that Meta’s use of facial recognition technology violated the Texas Capture or Use of Biometric Identifier Act (CUBI) by harvesting biometric data, specifically facial geometry, from millions of Texans without their explicit consent.
Meta’s unauthorized capture of this sensitive biometric data triggered accusations not only of privacy invasion but also of deceptive practices in violation of the Texas Deceptive Trade Practices Act. In addition to the hefty financial penalty, Meta agreed to cease these practices and adopt compliant data protection measures.
This landmark settlement sends a clear message to technology companies: Texas is aggressively enforcing biometric data protections and will impose severe consequences for violations. The implications resonate beyond Meta, underscoring the critical importance of securing personal biometric data with transparent user consent.
Investigating Character.AI and Its Peers
Following the Meta settlement, Attorney General Paxton extended his regulatory focus to generative AI platforms and social media services, including Character.AI, Reddit, Instagram, and Discord. Beginning in late 2024 and continuing into 2025, investigations centered on whether these companies failed to comply with the Texas SCOPE (Securing Children Online through Parental Empowerment) Act and other child protection laws.
The SCOPE Act requires platforms to obtain verifiable parental consent before collecting, disseminating, or sharing data about children under 18. It also mandates parental controls that allow guardians to manage children’s accounts, monitor interactions, and shield minors from harmful content or exploitation.
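In engineering terms, that consent requirement amounts to a gate in front of any code path that touches a minor’s data. The Python sketch below is a hypothetical illustration only; `ConsentRecord` and `may_process_minor_data` are invented names, not part of the statute or any real compliance library.

```python
from datetime import datetime, timezone

class ConsentRecord:
    """Hypothetical record of verifiable parental consent for a minor's account."""
    def __init__(self, guardian_id: str, minor_id: str,
                 verified: bool, granted_at: datetime):
        self.guardian_id = guardian_id
        self.minor_id = minor_id
        self.verified = verified          # guardian's identity was verified
        self.granted_at = granted_at

def may_process_minor_data(age: int, consent: "ConsentRecord | None") -> bool:
    """Refuse to collect or share a minor's data without verified consent."""
    if age >= 18:
        return True                       # the gate applies to minors only
    return consent is not None and consent.verified

# Usage: every data-collection path checks the gate before proceeding.
consent = ConsentRecord("guardian-42", "minor-7", verified=True,
                        granted_at=datetime.now(timezone.utc))
assert may_process_minor_data(16, consent)        # verified consent on file
assert not may_process_minor_data(16, None)       # no consent: blocked
```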
Character.AI came under particular scrutiny after multiple lawsuits alleged that its AI chatbots generated inappropriate, disturbing, or dangerous interactions with minors, ranging from encouraging self-harm to exposing children to sexualized content. The investigations seek to determine whether Character.AI and similar platforms failed to implement required safeguards or provided misleading disclosures about risks and protections.
The Attorney General’s office emphasizes that AI platforms interacting with children bear a heightened responsibility to ensure safety, transparency, and ethical content moderation to prevent harm.
Texas’s Responsible AI Governance Act
Texas has not limited itself to enforcement. In June 2025, the Texas Legislature enacted the Responsible Artificial Intelligence Governance Act (TRAIGA), which sets forth a pioneering framework for AI regulation. TRAIGA focuses on intentional misconduct rather than outcomes alone, imposing liability on companies that knowingly deploy discriminatory, manipulative, or unsafe AI systems.
Notably, the Attorney General holds exclusive enforcement authority under TRAIGA, signaling that Texas intends to maintain strong, centralized oversight of AI development and deployment. The act also establishes a regulatory sandbox to foster responsible AI development and explicitly prohibits certain harmful AI practices.
The legislation dovetails with the investigations, underpinning Texas’s comprehensive approach to regulating AI companies that serve its residents.
What These Actions Mean for Tech and AI Companies
For Meta, the settlement’s scale and conditions represent a warning shot about biometric privacy violations. For emerging AI companies like Character.AI and established social platforms alike, there is now unmistakable regulatory pressure to implement rigorous privacy safeguards, parental controls, and transparent user protections.
Companies operating in Texas must:
- Ensure clear, verifiable parental consent before collecting or processing children’s data.
- Implement robust parental controls and content moderation to protect minors.
- Maintain strict compliance with biometric privacy laws and data transparency obligations.
- Monitor AI-generated content proactively for potential harm, bias, or exploitation risks (a minimal sketch of such a screening step follows this list).
- Prepare for potential investigations and enforcement actions with clear governance and audit trails.
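The monitoring and audit-trail items above can be combined in a single screening step. The following Python sketch is a hypothetical illustration, assuming a placeholder keyword classifier and a local `audit.log` file as stand-ins for a production moderation model and compliance data store.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical sketch: screen AI-generated text before it reaches a minor
# and keep an append-only audit trail of every decision.
logging.basicConfig(filename="audit.log", level=logging.INFO,
                    format="%(message)s")

BLOCKED_CATEGORIES = {"self_harm", "sexual_content", "exploitation"}

def classify(text: str) -> set:
    """Placeholder classifier; a production system would call a trained
    moderation model here rather than matching keywords."""
    flags = set()
    if "hurt yourself" in text.lower():
        flags.add("self_harm")
    return flags

def deliver_to_minor(message: str, user_id: str) -> bool:
    """Return True only if the message passes moderation; log either way."""
    flags = classify(message)
    allowed = not (flags & BLOCKED_CATEGORIES)
    logging.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "flags": sorted(flags),
        "delivered": allowed,
    }))
    return allowed
```

Logging the decision and the triggering flags whether or not the message is delivered is what produces the kind of audit trail an enforcement inquiry would expect to see.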
Texas’s aggressive pursuit of tech industry accountability marks a seminal moment in digital governance. The $1.4 billion settlement with Meta sets a precedent for biometric privacy enforcement, while the investigations into Character.AI and other platforms spotlight emerging concerns about AI ethics and child safety.
As AI and social media applications evolve, the state’s actions reflect a growing demand for transparency, responsibility, and user protection in an increasingly automated world. This era demands that companies not only innovate but also safeguard rights and safety, under the watchful eye of regulators like the Texas Attorney General.