Fairy Tales in Police Reports and the Unfixable Flaw in AI Agents
- Katy Kelly

- Jan 8
- 3 min read
AI Business Risk Weekly is back from the holiday break.
This week:
- OpenAI confirmed that prompt injection attacks on agentic AI may never be fully solved.
- A police report generator wove Disney's "The Princess and the Frog" into official legal documents.
- China proposed the world's first regulations specifically targeting AI systems designed for human-like interaction.
- The Senate introduced a bill to block the Trump administration's attempt to preempt all state-level AI laws.
OpenAI Acknowledges Prompt Injection May Never Be Fully Solved
OpenAI stated that prompt injection attacks (where malicious instructions hidden in web content hijack AI agents) are "unlikely to ever be fully 'solved'" and that ChatGPT's agent mode "expands the security threat surface." The UK's National Cyber Security Centre echoed this assessment. OpenAI's primary defense is an RL-trained LLM-based automated attacker that simulates adversarial injections with visibility into the model's internal reasoning, discovering novel attack strategies that did not appear in human red teaming. User-facing mitigations like limiting logged-in access and requiring confirmation for sensitive actions effectively constrain the autonomy and access that make these agents valuable in the first place.
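The "requiring confirmation for sensitive actions" mitigation can be pictured as a gate in front of an agent's tool calls: a minimal sketch, assuming a hypothetical `execute_tool` dispatcher and an arbitrary list of sensitive tool names — this is not OpenAI's implementation or API.

```python
# Illustrative confirmation gate for agent tool calls.
# Tool names and the sensitivity list are hypothetical assumptions.

SENSITIVE_TOOLS = {"send_email", "make_payment", "delete_record"}

def execute_tool(name: str, args: dict, confirm) -> str:
    """Run a tool call, pausing for human approval on sensitive actions."""
    if name in SENSITIVE_TOOLS:
        approved = confirm(f"Agent wants to call {name} with {args}. Allow?")
        if not approved:
            return f"BLOCKED: {name} not approved by user"
    return f"EXECUTED: {name}"

# An injected instruction tries to trigger a payment; the user declines.
result = execute_tool("make_payment", {"to": "attacker", "amount": 500},
                      confirm=lambda prompt: False)
print(result)  # BLOCKED: make_payment not approved by user
```

The trade-off OpenAI describes is visible even in this toy: every prompt for approval interrupts the autonomy that made the agent useful, which is why such gates reduce rather than eliminate the risk.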
Business Risk Perspective: When OpenAI tells you a vulnerability is fundamental to the architecture, believe them. Every agentic AI deployment with access to email, payments, or internal systems now has a known attack surface that can be reduced but likely never eliminated, which means the risk calculus shifts from "if" to "when" and "how bad."
AI Police Report System Inserted Fairy Tale Content Into Official Documents
Draft One, an AI-powered police report generator from Axon (the company behind the Taser), produced a report stating that an officer had transformed into a frog after the system picked up audio from "The Princess and the Frog" playing in the background of body camera footage. The error was caught only because it was absurd. Axon apparently shipped the system without even basic validation layers in front of documents that could become part of official legal records, raising questions about how the product was tested before deployment.
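One form such a missing validation layer could take is a grounding check that flags draft-report sentences with little word overlap against the source transcript. This is a hypothetical sketch under assumed tokenization and an arbitrary threshold, not a description of Axon's product.

```python
# Hypothetical grounding check: flag report sentences whose words barely
# overlap the body-camera transcript. The 0.5 threshold is arbitrary.
import re

def flag_unsupported(report: str, transcript: str, threshold: float = 0.5):
    source_words = set(re.findall(r"[a-z']+", transcript.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", report.strip()):
        words = set(re.findall(r"[a-z']+", sentence.lower()))
        overlap = len(words & source_words) / max(len(words), 1)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged

transcript = "Suspect fled on foot near the market. Officer pursued."
report = ("The suspect fled on foot near the market. "
          "The officer transformed into a frog.")
print(flag_unsupported(report, transcript))
# ['The officer transformed into a frog.']
```

A word-overlap check is crude — it would miss plausible paraphrases — but it catches exactly the class of failure in this story: content with no basis in the source audio.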
Business Risk Perspective: The frog is the failure mode that got caught because it's ridiculous. What’s more worrisome are the plausible-sounding fabrications that make it into court filings, medical records, or incident reports without anyone noticing until the damage is done.
China Proposes First Regulations Specifically Targeting Human-Like AI Interaction
The Cyberspace Administration of China released draft rules governing AI systems that simulate human personalities and engage users emotionally. The proposed measures require providers to identify user emotional states, assess dependence levels, intervene when users exhibit extreme emotions or addictive behavior, and establish systems for algorithm review and data security. Services must not generate content endangering national security, spreading rumors, or promoting violence. Public comment closes January 25, 2026.
Business Risk Perspective: China just drew a regulatory line around a risk category that most Western companies still treat as a PR problem rather than a compliance one. Anyone deploying customer service bots, mental health tools, or educational assistants should assume that psychological harm and user dependence will eventually face similar scrutiny in other jurisdictions.
Senate Introduces Bill to Block Federal Preemption of State AI Laws
The Senate introduced the States' Right to Regulate Artificial Intelligence Act (SB 3557) to prohibit federal funding for implementing Trump's December 11 executive order, which seeks to establish a unified national framework that preempts conflicting state AI laws. The executive order directs federal agencies to challenge restrictive state AI laws and condition certain federal funding on state compliance. Similar preemption language faced bipartisan opposition when proposed in the National Defense Authorization Act and was removed, though House Republicans plan to pursue preemption through other legislative routes.
Business Risk Perspective: Preemption without replacement doesn't eliminate liability exposure; it just removes the clearest guideposts for what "reasonable care" looks like. Courts, insurers, and enterprise customers will still expect safeguards when things go wrong, regardless of what federal policy permits on paper.
AI Business Risk Weekly is a Conformance AI publication.
Conformance AI ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.



