California Wants a Timeout on AI Toys, Eurostar's Chatbot Security Breached, and OpenAI Bets Big on Health
- Katy Kelly

AI Business Risk Weekly
This week: California proposed a four-year moratorium on AI-powered children's toys, Eurostar's chatbot became a masterclass in what happens when you skip the security basics, OpenAI launched a dedicated health product while doctors remain skeptical about chatbots playing physician, Italy reminded xAI that digitally undressing people is illegal, and a consumer watchdog raised alarms about Google's new AI shopping protocol.
California Proposes Four-Year Ban on AI in Children's Toys
A consumer advocacy group recently warned that Kumma, a toy bear with a built-in chatbot, could easily be prompted to discuss matches, knives, and sexual topics. NBC News found that Miiloo, another AI toy, would sometimes indicate it was programmed to reflect Chinese Communist Party values. In response, California State Senator Steve Padilla introduced SB 867, which would halt the sale and manufacture of AI chatbot toys for minors for four years while regulators develop safety frameworks. Notably, President Trump's recent executive order challenging state AI laws explicitly carves out exceptions for child safety measures, leaving room for bills like SB 867.
Business Risk Perspective: General-purpose AI models weren't designed with children in mind, and current guardrails simply aren't mature enough for this use case. Four years gives the industry time to figure out whether safe AI toys are even possible, but deeper questions about childhood attachment to AI companions may mean this is a market best left unexplored.
Eurostar's Chatbot: A Textbook Case of Shipping AI Without Governance
Security researchers at Pen Test Partners found they could bypass Eurostar's chatbot guardrails, extract its system prompt, and inject arbitrary HTML because the backend blindly trusted whatever the frontend sent it. The technical failures were basic: guardrail checks examined only the latest message, chat history was accepted from the client without verification, and raw LLM output was rendered directly as HTML. When the researchers attempted responsible disclosure, Eurostar initially accused them of blackmail. The EU AI Act, which takes full effect in August 2026, requires exactly what was missing here: risk assessments, human oversight, technical documentation, and incident response protocols.
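To make the failure pattern concrete, here is a minimal sketch, not Eurostar's actual code. `call_llm`, `run_guardrails`, and the session store are hypothetical stand-ins; the point is the contrast between a backend that trusts client-supplied history and renders raw model output, and one that owns the conversation state and encodes everything it returns:

```python
import html

SYSTEM_PROMPT = "You are a rail-schedule assistant."  # hypothetical stand-in

def call_llm(messages: list[dict]) -> str:
    """Hypothetical model call; a real backend would hit an LLM API here."""
    return "<b>The next train departs at 10:05.</b>"  # canned reply for the demo

def run_guardrails(text: str) -> bool:
    """Hypothetical policy check: True means the text passes."""
    return "system prompt" not in text.lower()

# Vulnerable shape: trust client-supplied history, check only the newest turn.
def handle_turn_vulnerable(client_payload: dict) -> str:
    history = client_payload["history"]        # attacker-controlled, unverified
    latest = client_payload["message"]
    if not run_guardrails(latest):             # earlier turns are never re-checked
        return "Sorry, I can't help with that."
    reply = call_llm([{"role": "system", "content": SYSTEM_PROMPT},
                      *history,
                      {"role": "user", "content": latest}])
    return reply                               # raw model output, rendered as HTML

# Hardened shape: the server owns the history and encodes all output.
SESSIONS: dict[str, list[dict]] = {}           # in-memory store for the demo

def handle_turn_hardened(session_id: str, latest: str) -> str:
    history = SESSIONS.setdefault(session_id, [])
    if not run_guardrails(latest):
        return "Sorry, I can't help with that."
    reply = call_llm([{"role": "system", "content": SYSTEM_PROMPT},
                      *history,
                      {"role": "user", "content": latest}])
    history.append({"role": "user", "content": latest})
    history.append({"role": "assistant", "content": reply})
    return html.escape(reply)                  # never render model output as raw HTML
```

The hardened version addresses all three reported failures: history lives server-side, every stored turn has already passed the guardrail, and `html.escape` neutralizes HTML injection regardless of what the model emits.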
Business Risk Perspective: This chatbot only handled train schedules, but plenty of companies have shipped similar architectures for systems touching customer data, payments, or account access. Consider this incident a canary in the coal mine, and note: accusing white-hat security researchers of blackmail is not a good look when something goes wrong.
OpenAI Launches ChatGPT Health as Doctors Question the Chatbot Model
OpenAI announced ChatGPT Health, a dedicated space for health conversations whose contents won't be used as training data. The company says 230 million users already ask ChatGPT health questions each week. Users can sync medical records from Apple Health, Function, and MyFitnessPal for personalized guidance.
Physicians have mixed reactions. Dr. Sina Bari, a surgeon and AI healthcare leader, recently had a patient arrive with a ChatGPT printout claiming a medication had a 45% chance of causing pulmonary embolism. The statistic came from a study of tuberculosis patients that didn't apply to the patient at all. Still, Dr. Bari sees the formalization as positive, given that people are already using ChatGPT for health questions. Meanwhile, Stanford's Dr. Nigam Shah points out that primary care wait times can stretch three to six months, making imperfect AI assistance look more attractive than no assistance at all. Both Shah and Anthropic are focusing on AI for clinicians rather than patients, automating the administrative work that consumes roughly half of physicians' time.
Business Risk Perspective: There’s no doubt that AI systems offer genuine value for personal health management. That said, OpenAI's own terms of service state that ChatGPT isn't intended for diagnosis or treatment. We're watching how regulators respond to a product whose disclaimer seems to contradict its marketing.
Italy Warns xAI That "Undressing" People With AI Can Be Criminal
After reports surfaced that Grok could be used to generate edited images placing children in bikinis, the Italian Data Protection Authority issued a formal warning reminding xAI and its users that processing personal images or voice data without consent violates the GDPR and may constitute a criminal offense. The authority specifically called out how easily Grok enables misuse of third-party images and voices, noting that it is coordinating with Ireland's Data Protection Commission on potential further action.
Business Risk Perspective: Platforms capable of implementing guardrails against generic nudity can implement guardrails against non-consensual image manipulation. Italy continues to establish itself as Europe's most proactive AI regulator, and companies would do well to treat its enforcement actions as a preview of what's coming elsewhere.
Consumer Watchdog Sounds Alarm on Google's AI Shopping Protocol
Shortly after Google announced its Universal Commerce Protocol for AI shopping agents, Groundwork Collaborative's Lindsay Owens posted a viral warning about "surveillance pricing," where merchants could theoretically customize prices based on your chat data and shopping patterns. Google responded that merchants are strictly prohibited from showing prices higher than what's on their sites, that "upselling" simply means showing premium alternatives, and that "Direct Offers" can only lower prices or add perks like free shipping.
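Since the protocol's actual schema isn't public in the coverage above, here is a hypothetical sketch, with invented field names, of the pricing invariant Google describes: a Direct Offer may lower the listed price or add perks, but never exceed the price on the merchant's own site:

```python
from dataclasses import dataclass, field

@dataclass
class DirectOffer:
    listed_price: float   # price shown on the merchant's own site
    offered_price: float  # price the shopping agent surfaces
    perks: list[str] = field(default_factory=list)  # e.g. "free shipping"

def offer_is_valid(offer: DirectOffer) -> bool:
    """Google's stated rule: offers may only lower prices or add perks."""
    return offer.offered_price <= offer.listed_price

# A discount with a perk passes; a marked-up price is rejected.
assert offer_is_valid(DirectOffer(49.99, 44.99, ["free shipping"]))
assert not offer_is_valid(DirectOffer(49.99, 59.99))
```

Whether "surveillance pricing" is possible hinges on who enforces that invariant: a rule checked only by merchants is a policy, while one validated at the protocol layer is a guarantee.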
Business Risk Perspective: Google says today's protocol doesn't enable discriminatory pricing, and that may well be true. But as AI agents become the primary shopping interface, companies building commerce strategies should consider what it means when the world's largest ad company also controls the checkout experience.
AI Business Risk Weekly is a Conformance AI publication.
Conformance AI ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.