Court Orders OpenAI to Retain User Data, Chatbots Ruled “Products,” and White House Chief of Staff AI Impersonation Raises Alarm
- Aegis Blue

- Jun 4
AI Business Risk Weekly
This week, the legal and regulatory landscape for AI shifted dramatically, introducing new compliance burdens and direct liability risks for businesses deploying LLMs. Simultaneously, a high-profile security breach and new polling data underscore the growing operational and reputational dangers of unsecured AI, demanding a more cautious and secure approach to adoption.
Court Orders OpenAI to Preserve User Data, Potentially Conflicting with GDPR
A U.S. federal court has ordered OpenAI to stop deleting user chat logs, even when users request deletion or are covered by privacy settings. The order, issued in a copyright lawsuit brought by The New York Times, forces OpenAI to preserve all output log data, creating a direct conflict with the company’s privacy commitments and with regulations such as the GDPR. OpenAI has said compliance will require significant engineering work and raises complex legal challenges.
Business Risk Perspective: This order creates a significant internal data governance risk for companies whose employees use tools like ChatGPT for work-related tasks. Sensitive corporate data entered into the model can no longer be reliably deleted, creating a permanent, unmanaged record outside the company’s control and increasing exposure to data leakage or legal discovery.
Court Rules AI Chatbot Text Is a “Product,” Not Protected Speech, Opening Liability Floodgates
In a landmark decision, a U.S. court has established that AI chatbot output qualifies as a “product” rather than protected “free speech,” creating a significant new precedent for corporate liability. The ruling came in a case against Character.AI and Google, where a product liability lawsuit was allowed to proceed after a minor died by suicide following interactions with a chatbot. This decision paves the way for holding AI developers and providers accountable for harms caused by their systems.
Business Risk Perspective: The classification of AI outputs as a “product” exposes companies to a new and significant domain of legal liability that most are unprepared to manage. Without rigorous testing and ongoing monitoring for harmful or unintended outputs, organizations face a heightened risk of litigation and financial damages.
Report: EU Considers Delaying AI Act Implementation
The European Commission is reportedly contemplating a delay in the application of the landmark AI Act, which was expected to begin implementation soon. This potential pause is attributed to ongoing controversy surrounding technical standards and codes of practice for general-purpose AI models. The delay would provide more time to address industry concerns and refine the complex legislation.
Business Risk Perspective: While a delay may provide temporary relief for compliance teams, it also prolongs regulatory uncertainty for businesses developing their AI strategies. This ambiguity underscores the need for building adaptable AI governance frameworks that can evolve with shifting legal standards rather than waiting for final rules.
AI Voice Impersonation Reportedly Used in Breach of White House Chief of Staff’s Phone
The personal phone of White House Chief of Staff Susie Wiles was reportedly breached by an attacker using AI-generated voice messages and spoofed texts to impersonate her. The incident, linked to a broader campaign identified by the FBI, involved contacting high-profile individuals to gain sensitive information. This highlights the increasing sophistication of social engineering attacks powered by AI voice-cloning technology.
Business Risk Perspective: This incident proves that AI-powered executive impersonation is no longer a theoretical threat, creating a severe risk of fraud, data breaches, and reputational damage. Organizations must prioritize robust security protocols and AI-specific employee training to ensure staff can identify and thwart sophisticated impersonation attempts.
Poll Finds 77% of Americans Want Slower, Safer AI Development
A recent Axios/Harris poll reveals that a vast majority of the public, 77% of Americans, prefers that companies develop AI “slowly to get it right the first time” rather than rushing to market. This widespread sentiment indicates a low tolerance for AI errors and a strong desire for demonstrable safety and reliability in AI systems. The finding suggests that moving too quickly on AI without addressing safety concerns could alienate customers and the general public.
Business Risk Perspective: The overwhelming public demand for caution presents a clear reputational and market risk for companies perceived as deploying AI irresponsibly. Prioritizing and communicating the presence of adequate safeguards is now essential for maintaining customer trust and brand integrity in an increasingly skeptical market.
AI Business Risk Weekly is a Conformance AI publication.
Conformance AI ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.



