
Newsom signs SB-53, the FTC targets hype, and Anthropic raises the coding bar

  • Writer: Aegis Blue
  • Oct 1
  • 3 min read


AI Business Risk Weekly


California just passed a first-of-its-kind law for frontier AI, the FTC is going after misleading AI claims, and Anthropic says its newest model sets a coding benchmark at a fraction of the cost. Lawyers are being warned their licenses could be at risk from AI misuse, while OpenAI is pushing a new way to measure progress that ties directly to economically valuable work.


California’s SB-53 sets a new bar for frontier AI


On September 29, Governor Newsom signed SB-53, a law that forces large AI developers to publish safety frameworks, summarize catastrophic-risk assessments, and report critical incidents. The bill also adds whistleblower protections and penalties, and early explainers highlight that annual updates and incident reports will become part of the compliance routine. With California’s history of shaping national standards, SB-53 could serve as a template others follow.


Business Risk Perspective: SB-53 signals what “good governance” will soon mean in practice: safety frameworks, incident hotlines, and documented risk processes. We’ve published our own writeup on what the law requires and how to prepare — worth a look before due-diligence questions start landing on your desk.


FTC cracks down on deceptive AI claims


Misleading advertising around AI capabilities is now firmly on the FTC’s radar. The Commission announced new enforcement actions against companies exaggerating what their systems can do, underscoring that claims must be truthful, evidence-based, and backed by documented testing. Companies without proof that their systems deliver as promised risk regulatory penalties as well as reputational fallout.


Business Risk Perspective: The FTC’s focus on hype means every claim is a potential liability if reality doesn’t hold up. The bigger risk isn’t the fine, it’s when a regulator points out that your “AI-powered” promise doesn’t actually exist.


Anthropic’s Claude Sonnet 4.5 pushes toward stronger agents


Anthropic rolled out Claude Sonnet 4.5, which it bills as the top-benchmarked coding model, reporting that it beats the previous leader, Opus 4.1, at roughly one-fifth the cost. The model sustained stronger multi-hour coding runs and scored higher on computer-use tasks, and Anthropic published a detailed system card that included interpretability probes and checks for reward hacking. The system card itself is notable: an unusually open look at how the company tests for subtle safety issues.


Business Risk Perspective: AI coding is moving from novelty to infrastructure. It’s encouraging to see a frontier lab make its safety testing visible, but cost drops and capability gains mean pressure to scale agents will rise faster than the guardrails around them.


Legal risk warning: AI errors could cost lawyers their licenses


A senior LexisNexis executive cautioned that it’s only a matter of time before an attorney loses their license over AI misuse, pointing to a growing number of sanctions tied to fabricated citations and faulty filings. Fortune reported the comments in the context of rising concern that legal workflows are adopting tools without strong oversight.


Business Risk Perspective: We’ve already seen lawyers sanctioned for hallucinated citations in court filings. Losing a license is the logical next step, and other licensed professions may well follow suit.


OpenAI reframes benchmarks with GDPval


OpenAI introduced GDPval, a new evaluation suite spanning 1,320 tasks across 44 occupations, with complexity closer to multi-hour professional work. Early results show leading models approaching expert performance across many categories, positioning the release as a way to measure AI by “economically valuable” output rather than narrow benchmarks.


Business Risk Perspective: GDPval reflects the growing incentives to swap human work for AI work—and the risks that come with it. Watching how models score here is important, but so is running your own evaluations to see what happens when the “valuable work” is your own.



AI Business Risk Weekly is a Conformance AI publication.  


Conformance AI ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.


AI Business Risk Weekly: Emerging AI risks, regulatory shifts, and strategic insights for business leaders.
