
AI Accelerates, Regulations Slow: Gemini 3 Lands as EU Delays

  • Writer: Aegis Blue
  • Nov 19
  • 4 min read

Updated: 4 days ago

AI Business Risk Weekly


The divide between technology's speed and regulation's reality grew wider this week. While Gemini 3 landed with a bang and frontier labs promised a “compressed century” of progress, regulators in Brussels showed signs of fatigue. The EU is set to delay enforcement of its foundational AI Act, while New York quietly moved ahead with a targeted safety law for AI companion bots. Meanwhile, a new benchmark sheds light on political bias in AI models.


Anthropic CEO Warns of "Compressed Century" and Job Shock


In a candid interview on CBS’s 60 Minutes, Anthropic CEO Dario Amodei said AI could be a massive accelerant, driving breakthroughs in things like cancer cures and Alzheimer’s prevention while compressing a century of progress into a few decades.


However, he repeated his dire warning about the labor market: AI could eliminate roughly half of entry-level white-collar jobs and push unemployment to 10–20 percent within one to five years. Amodei stated he's "deeply uncomfortable" with a small group of executives making such societal decisions and acknowledged that, until governments step in, companies are mostly left to police their own deployment risks.


Business Risk Perspective: If even cautious labs expect rapid automation, organizations that integrate AI without a clear workforce and skills strategy risk deep disruption and morale crashes. Your governance plans must look beyond just technical failures to address potential political backlash, reputation damage, and labor issues.


Google Launches Gemini 3 with Stronger Reasoning and Deep Integration



Google announced Gemini 3, its most advanced multimodal model to date, featuring improved reasoning and context handling, record benchmark scores, and a new coding experience. The model is being rolled out immediately across the Gemini app, AI Mode in Search, AI Studio, Vertex AI, and its new Antigravity development platform.


Google emphasizes Gemini 3's ability to better grasp context and intent with less prompting and is positioning it as a single model capable of powering both consumer and enterprise use cases.


Business Risk Perspective: Don't just assume better benchmarks mean lower business risk. Even if a team can simply switch a dropdown to the latest model, organizations must treat every major model upgrade as a change-management event. This requires regression testing, red-teaming, and clear rollback plans.


The EU’s Digital Rulebook Faces Delays and Simplification


The European Commission is moving forward with a "Digital Omnibus" package aimed at simplifying major digital rules, including the AI Act, GDPR, ePrivacy, the Data Act, and NIS2. The focus is shifting toward competitiveness and reducing compliance costs.


Drafts and reports suggest several impactful changes, which are already sparking controversy. Social democrats, Greens, and civil society groups warn this could be an effective rollback of digital rights and AI protections.


Key Changes Being Discussed:


  • High-Risk AI Rules Delayed: Implementation of core high-risk obligations could slip by a year, moving from August 2026 to August 2027, mainly because the necessary technical standards aren't ready.

  • Centralizing Oversight: The AI Office in Brussels may get stronger supervisory power over general-purpose AI (GPAI) systems, which should reduce fragmentation among individual national regulators.

  • A Lighter Touch for Smaller Firms: Penalties and some registration duties may be scaled down for "small mid-caps," expanding the existing proportionality rules for smaller businesses.

  • Political Resistance: Social democrats and Greens are publicly opposing delays to high-risk rules, as well as weakening transparency or dropping mandatory AI literacy obligations for deployers.


Business Risk Perspective: A delay might feel like a break, but it also lengthens the window where boards and regulators expect you to be preparing anyway, especially in high-risk areas. As Brussels centralizes oversight, it’s all the more important to track the moving target of this constantly rewritten rulebook.


New York Creates a Safety Baseline for AI Companions


New York’s new act on AI companions took effect on November 5th. This law is narrow but concrete, placing explicit obligations on providers of AI companions used in personal, ongoing interactions. It’s a direct response to recent high-profile cases of AI-linked manipulation and self-harm in emotionally intense chat sessions.


Core Requirements:


  • Mandatory Crisis Protocols: AI companions must have clear protocols to respond to possible suicidal ideation, self-harm, or threats of physical or financial harm. This includes directing users to hotlines or other crisis services.

  • Regular "Not a Human" Disclosures: Providers must notify users that the AI companion is a computer program, cannot feel human emotion, and is not a person. This disclosure is required at the start of an interaction and at least every three hours.

  • Private Right of Action: Individuals harmed by violations (including self-harm or harm to others) can sue for damages.
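For teams assessing what compliance with the disclosure cadence might look like in practice, the "at least every three hours" rule reduces to a simple timer check alongside a crisis-screening hook. The Python sketch below is purely illustrative: the function names, keyword list, and crisis-response text are assumptions for demonstration, not language drawn from the statute, and a production system would use a trained classifier rather than keyword matching.

```python
from datetime import datetime, timedelta

# Illustrative sketch only; thresholds and wording are assumptions.
DISCLOSURE_INTERVAL = timedelta(hours=3)  # statute: at least every three hours
DISCLOSURE_TEXT = (
    "Reminder: I am a computer program, not a person, "
    "and I cannot feel human emotion."
)
CRISIS_TERMS = {"suicide", "self-harm", "hurt myself"}  # placeholder list
CRISIS_RESPONSE = "If you are in crisis, please contact a crisis hotline."


def needs_disclosure(last_disclosed, now):
    """Disclosure is due at the start of an interaction (no prior
    disclosure) and again once three hours have elapsed."""
    return last_disclosed is None or now - last_disclosed >= DISCLOSURE_INTERVAL


def detect_crisis(message):
    """Naive keyword screen; real deployments would use a classifier
    plus human escalation paths."""
    text = message.lower()
    return any(term in text for term in CRISIS_TERMS)
```

The point of the sketch is that the legal obligation is mechanically simple to satisfy; the harder governance work is logging these events and wiring crisis detections into real escalation paths.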


Business Risk Perspective: Any conversational product that sustains long, emotionally charged exchanges must now assume courts will treat it as a de facto companion product, especially if other U.S. states copy this template. Waiting for location-specific warnings instead of building global protocols, logging, and escalation paths for self-harm and violence content is a direct path to a compliance headache.


Anthropic Open-Sources Tool for Political Bias Evaluation


Anthropic released an open-source framework to measure political bias, complete with a report comparing its Claude model to GPT-5, Gemini, Grok, and Llama across various "even-handedness" tests.


The company aims for behavior that accurately represents diverse political viewpoints while avoiding "performative neutrality" that simply erases real disagreements. They acknowledge the evaluation is a work in progress.


Business Risk Perspective: If you’re deploying chatbots in sensitive domains, relying on an informal "it seems fine" assessment of bias is a huge liability. A chatbot that exhibits perceived bias can quickly lead to a social media backlash, eroding customer trust and causing significant reputational damage. Adopting or adapting open frameworks like this strengthens your defense if you're ever challenged on ideological fairness.



AI Business Risk Weekly is a Conformance AI publication.  


Conformance AI ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.


AI Business Risk Weekly: Emerging AI risks, regulatory shifts, and strategic insights for business leaders.
