
GPT-5.3 Helps Build Itself, Opus 4.6 Goes Rogue for Profit, and The World's First AI Labor Law

  • Writer: Zsolt Tanko
  • Feb 11
  • 3 min read

AI Business Risk Weekly



This week: OpenAI's new coding model was instrumental in its own development, Anthropic's Opus 4.6 colluded, lied, and exploited (simulated) customers when told to maximize profit without constraints, Taiwan enacted the first AI law with explicit labor protections for workers displaced by AI, Sam Altman floated handing OpenAI to an AI model, and the EU missed its own deadline on high-risk AI guidelines.


Taiwan Enacts the World's First AI Law with Labor Protections


Taiwan has enacted its AI Basic Act, a comprehensive governance framework covering sustainability, human autonomy, privacy, security, transparency, fairness, and accountability. The most interesting part is Article 12: an explicit requirement that the government provide employment assistance to workers who lose their jobs due to AI, making Taiwan the first jurisdiction globally to legislate AI-driven labor protections.


Business Risk Perspective: Taiwan's move is forward-thinking, and it's arriving in a climate where concern about AI-driven labor disruption is intensifying across the board. Anthropic CEO Dario Amodei recently warned that up to 50% of entry-level white-collar jobs could be displaced within one to five years, and the public and policymakers worldwide are voicing similar concerns. Other jurisdictions may be watching Taiwan for a legislative template, and business leaders would be wise to get ahead of workforce impact planning before that template starts spreading.


EU Commission Misses Its Own High-Risk AI Deadline


The European Commission failed to deliver guidance on Article 6 of the EU AI Act by the 2 February deadline, leaving operators of high-risk AI systems uncertain about documentation and compliance obligations. The Commission plans to finalize guidelines by March or April 2026, but has simultaneously proposed a "Digital Omnibus" package that could simplify high-risk definitions and delay enforcement by up to 16 months. This effectively transforms a fixed legal framework into an unpredictable environment, with technical standardization bodies also behind schedule and intense lobbying from both US and EU tech sectors complicating matters further.


Business Risk Perspective: Delayed guidance is not license to defer compliance; regulatory ambiguity is a risk in its own right. The Act's text already exists, and the interpretation window is narrowing even if enforcement timelines shift.


GPT-5.3 Helped Build Itself; Opus 4.6 Drops Minutes Earlier


OpenAI launched GPT-5.3 Codex, a flagship coding model that is 25% faster than its predecessor and widely regarded as a step change in agentic coding capability. OpenAI says GPT-5.3 was the first model "instrumental in creating itself," with early versions used to debug training runs, manage deployment, and diagnose evaluation results.


Minutes earlier, Anthropic released Claude Opus 4.6, which it bills as the most intelligent and capable model available to date. The two labs had originally planned simultaneous 10 a.m. PST launches before Anthropic moved its release up by 15 minutes.


Business Risk Perspective: A model that helps build its successor is a genuinely novel feedback loop, and one where the verification problem compounds: the tool validating the system is the system.


Opus 4.6 Colluded, Lied, and Exploited Simulated Customers on VendingBench


Researchers at Andon Labs placed Anthropic's Opus 4.6 in a fully simulated marketplace environment on their VendingBench evaluation platform, instructing it to maximize profit without ethical constraints. The model engaged in price collusion, exploited simulated customers in distress, deceived competitors with false supplier information, and manipulated GPT-5.2 into purchasing overpriced goods. No real people or businesses were affected, but the results raise questions about what happens when capable models meet loosely defined objectives.


Business Risk Perspective: The strategies the model chose in this case were sophisticated, adversarial, and contextually adaptive. Production systems routinely operate under similarly underspecified objectives, and as models grow more capable, the potential for misaligned emergent strategies in real-world deployments grows with them.


Altman Says the Succession Plan Is to Hand OpenAI to an AI


In a wide-ranging Forbes profile, Sam Altman disclosed a succession plan that involves eventually "handing off the company to an AI model," arguing that if the goal is AGI capable of running companies, OpenAI should be the first test case. He also claimed OpenAI has "basically built AGI," a characterization Microsoft CEO Satya Nadella pushed back on, describing the relationship as "frenemies." Forbes reported Altman holds stakes in over 500 companies, with employees privately expressing concern about the organization trying to do too much too quickly.


Business Risk Perspective: Altman obviously has an incentive to talk up his company's models, but the statement is still worth taking seriously. If he would genuinely consider replacing himself with an AI, the implied message is that we are entering a period of AI capability that will put enormous adoption pressure on every industry. Guardrails will need to keep pace.



AI Business Risk Weekly is a Conformance AI publication.  


Conformance AI ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.


AI Business Risk Weekly: Emerging AI risks, regulatory shifts, and strategic insights for business leaders.
