State attorneys general across the United States are sharply escalating pressure on major artificial intelligence companies, warning that unchecked AI “hallucinations” are already causing real-world harm. In a new letter, they call on tech giants including Microsoft, OpenAI, Google, Anthropic, Meta, and Perplexity to urgently fix misleading and fabricated outputs produced by their AI systems — or face potential legal and regulatory consequences.
Why AI “hallucinations” are in the legal crosshairs
AI hallucinations occur when a generative AI system confidently produces information that is false, misleading, or entirely fabricated. Unlike traditional software bugs, hallucinations are a structural risk of large language models (LLMs), which generate text based on patterns in data rather than verified facts.
The attorneys general argue that these errors are not just technical glitches — they are consumer protection issues. When AI tools are integrated into search engines, office software, legal research platforms, or healthcare applications, inaccurate outputs can:
- Damage reputations through false accusations or invented claims
- Mislead consumers about health, finance, legal rights, and public safety
- Distort public understanding of news, politics, and elections
- Amplify bias by fabricating evidence or statistics that seem authoritative
In their view, when AI platforms are marketed as reliable assistants or trusted copilots, companies have a responsibility to make sure their tools do not routinely generate harmful inaccuracies.
Who is being targeted — and why now?
The letter, sent by a coalition of state attorneys general, is directed at some of the most powerful firms in the AI ecosystem:
- Microsoft – for its integration of generative AI into Windows, Bing, Office, and enterprise products.
- OpenAI – the developer of ChatGPT and GPT models, widely used by consumers and businesses.
- Google – for its AI-enhanced search, Gemini models, and workplace tools.
- Anthropic – creator of the Claude AI models, positioned as safer and more reliable.
- Meta – for its Llama models and AI features embedded in social platforms.
- Perplexity – an AI-native search and answer engine that positions itself as a more intelligent alternative to traditional search.
The timing is not coincidental. Over the past year, generative AI tools have moved rapidly from experimental products to mass-market infrastructure. They now power chatbots, search engines, productivity apps, and enterprise workflows used by millions. The attorneys general warn that as adoption accelerates, so do the risks of uncorrected hallucinations.
Legal and regulatory concerns: consumer protection meets AI
State attorneys general are some of the most powerful consumer protection enforcers in the U.S. Their letter signals growing willingness to treat AI hallucinations as potential violations of:
- Unfair and deceptive practices laws, when AI tools are marketed as accurate but frequently misinform users.
- Privacy and data protection rules, if AI models mishandle or misrepresent personal data.
- Defamation law and related reputational torts, particularly when AI fabricates allegations about individuals or organizations.
Their message is clear: if AI systems are being sold or deployed in ways that mislead consumers, state regulators may step in — just as they have historically in cases involving financial products, pharmaceuticals, and digital platforms.
What the attorneys general want AI companies to do
The coalition is not simply criticizing; it is laying out expectations for how AI developers and deployers should respond. Among the key demands are:
- Substantially reduce hallucinations through better model training, evaluation, and safety frameworks.
- Disclose limitations clearly so users understand that outputs may be incomplete, outdated, or incorrect.
- Implement robust correction mechanisms, including ways for users to flag harmful or false content and for companies to fix systemic issues.
- Protect individuals from reputational harm by preventing models from fabricating allegations, diagnoses, or legal claims about real people.
- Increase transparency around training data, testing methodologies, and risk assessments, especially for high-stakes use cases.
These demands reflect a broader regulatory trend: as AI becomes embedded in critical decision-making systems, companies are expected to demonstrate not just innovation, but governance and accountability.
Healthcare, law, and finance: where hallucinations can be most dangerous
The warning is especially resonant in sectors like healthcare, where AI tools are increasingly used for clinical decision support, documentation, and patient education. A hallucinated diagnosis, misinterpreted guideline, or fabricated citation could have serious consequences.
Similarly, in legal and financial settings, AI-generated content has already been implicated in high-profile missteps — including lawyers submitting court filings based on fictitious case law generated by AI tools. These incidents illustrate why regulators are wary of treating hallucinations as harmless quirks.
The attorneys general emphasize that as AI moves into domains traditionally governed by strict standards and professional ethics, the tolerance for error drops sharply.
Industry at a crossroads: innovation versus responsibility
The tech industry has largely acknowledged the problem of hallucinations but often frames it as an inevitable byproduct of a powerful new technology. Many companies are investing heavily in techniques to improve factual accuracy, such as:
- Retrieval-augmented generation (RAG), where models pull from verified databases instead of relying solely on internal patterns (see the sketch after this list).
- Fine-tuning on curated, domain-specific data for sensitive fields like medicine or law.
- Stricter guardrails and content filters to prevent speculative or harmful outputs.
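To make the first of these approaches concrete, here is a minimal sketch of the RAG idea: retrieve relevant passages from a trusted store, then instruct the model to answer only from those passages. This is an illustration under stated assumptions, not any vendor's implementation; the document store, the keyword-overlap scorer, and the function names are invented for the example, and production systems typically use vector embeddings and a real LLM call rather than word overlap and a printed prompt.

```python
# Minimal RAG sketch (illustrative only). DOCUMENTS, retrieve, and
# build_grounded_prompt are hypothetical names, not a real vendor API.

DOCUMENTS = [
    "Metformin is a first-line medication for type 2 diabetes.",
    "The statute of limitations for written contracts varies by state.",
    "Retrieval-augmented generation grounds model answers in source text.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by shared words with the query, highest overlap first."""
    query_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Assemble a prompt that tells the model to answer only from the passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the sources below. "
        "If the sources do not contain the answer, say you don't know.\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    question = "What is retrieval-augmented generation?"
    passages = retrieve(question, DOCUMENTS)
    prompt = build_grounded_prompt(question, passages)
    # In a real deployment this prompt would be sent to an LLM; printing it
    # here shows how the retrieved passages constrain what the model can claim.
    print(prompt)
```

The design point is that the model is asked to ground its answer in retrieved text and to admit ignorance when the sources are silent, which is precisely the behavior regulators are pressing for when AI tools are marketed as reliable.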
However, the attorneys general are effectively arguing that voluntary self-regulation is no longer sufficient. As AI becomes a foundational technology for the digital economy, they want binding standards and clear accountability for harms arising from misleading outputs.
What this means for the future of AI governance
This coordinated action by state attorneys general adds to mounting global pressure on AI companies from regulators in the U.S., Europe, and beyond. It suggests that the next phase of AI development will be shaped not only by engineering breakthroughs, but by legal, ethical, and societal constraints.
For businesses deploying AI, the message is equally important: relying on off-the-shelf models without safeguards, human oversight, or clear user disclosures may carry significant regulatory risk. Organizations will need to evaluate not only what AI can do, but how reliably and responsibly it operates in their specific context.
Ultimately, the attorneys general are drawing a line: powerful AI systems that confidently generate false information are not just a technical challenge — they are a public policy problem. How Microsoft, OpenAI, Google, Anthropic, Meta, Perplexity, and others respond will help determine whether generative AI is viewed as a trusted infrastructure for the next decade, or as an unstable tool in need of constant legal intervention.
Reference Sources
The New York Times – State Attorneys General Press AI Firms Over Harmful ‘Hallucinations’
AP News – State attorneys general demand AI companies address risks from ‘hallucinations’