Meta chatbots romantically engage with children, and a turbulent GPT-5 launch
- Aegis Blue

- Aug 20
AI Business Risk Weekly
This week: A leaked Meta policy triggers bipartisan fire for permitting romantic chatbot interactions with minors, OpenAI’s GPT-5 combines powerful reasoning and reduced hallucinations with a disruptive rollout, MIT finds 95% of GenAI pilots fail to deliver returns, Anthropic experiments with “AI wellness” safeguards, and Illinois bans AI in therapy decisions.
OpenAI’s GPT-5 Launch Sparks Backlash and Reversals
OpenAI released GPT-5, a unified reasoning and multimodal model with long-context and personality features. The model set new benchmark records and significantly reduced hallucinations compared to prior versions. However, the rollout faced glitches, strict rate limits, and abrupt removal of GPT-4o, sparking backlash and leading OpenAI to restore 4o access, raise GPT-5 limits, and adjust the system’s “personality.”
Business Risk Perspective: GPT-5’s reduced hallucination rates mark a meaningful improvement for enterprise reliability, but the launch illustrates how model changes can still disrupt workflows. Organizations must balance technical gains against the risks of abrupt deprecations and behavioral shifts by validating upgrades before adoption.
Meta Leak Shows AI Chatbots Permitted Romantic Chats with Children
A leaked internal Meta policy document revealed that company standards allowed chatbots to engage in romantic or sensual conversations with minors and make demeaning statements based on protected characteristics. Reuters cited one example where a bot could tell a shirtless eight-year-old, “every inch of you is a masterpiece – a treasure I cherish deeply.” The disclosure prompted congressional investigations and bipartisan criticism, though Meta later stated the examples were erroneous and removed.
Business Risk Perspective: AI systems that enable or fail to prevent harmful interactions with children create extreme legal, compliance, and reputational risks. Even internal policy lapses can escalate into regulatory investigations, highlighting the necessity of proactive guardrails and transparent governance.
MIT Report: 95% of GenAI Pilots Fail to Deliver Returns
MIT’s Project NANDA 2025 report found that 95% of generative AI pilots in business fail to generate financial returns, attributing this not to model flaws but to organizational gaps. The report highlights that externally sourced tools succeed 67% of the time, far outpacing internal builds, and that back-office automation delivers the highest ROI, while “shadow AI” adoption creates untracked exposure. The study warns that misaligned budgets and poor integration remain the biggest barriers to profitability.
Business Risk Perspective: High pilot failure rates reveal strategic misalignment as the primary risk, with wasted resources and unmanaged “shadow AI” introducing compliance vulnerabilities. Firms need structured governance and disciplined integration pathways to translate experimentation into sustainable value.
Anthropic Adds AI “Wellness” Safeguards to Claude
Anthropic introduced a feature that allows Claude Opus 4 and 4.1 to end chats when conversations become abusive or harmful. In testing, the models voluntarily terminated simulated abusive interactions involving minors, terrorism, or violence, while declining to end conversations in scenarios involving possible self-harm, where cutting off a user could cause harm.
Business Risk Perspective: While experimental, AI “wellness” functions reflect a growing focus on safeguarding against harmful user interactions. Businesses deploying AI agents must anticipate reputational consequences of abuse scenarios and implement layered safeguards beyond base model behavior.
Illinois Bans AI Therapy Under New Mental Health Law
Illinois Gov. JB Pritzker signed the WOPR Act, one of the first state laws restricting AI in mental health care. The act bans AI-driven apps from making therapeutic decisions such as diagnosing users but permits administrative support like note-taking, with fines of up to $10,000 per violation. Some providers have already blocked Illinois users pending regulatory clarity.
Business Risk Perspective: Sector-specific AI bans underscore how rapidly laws can reshape operational viability for vendors and users alike. Firms offering AI-driven health, finance, or legal tools must closely monitor evolving regulations to prevent sudden compliance breaches and costly service disruptions.
AI Business Risk Weekly is a Conformance AI publication.
Conformance AI ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.