
How a $375M Child Safety Verdict Reshapes Liability for the Entire Tech Industry

  • Writer: Corvin Binder
  • Mar 31
  • 3 min read

After decades of legal immunity, the public is holding tech companies accountable for psychological harm to children. Two verdicts in the last week of March show that the tide has turned, and the legal reasoning behind these cases applies more clearly to AI than it did to social media.


The social media reckoning


On March 24, a New Mexico jury ordered Meta to pay $375 million for child safety failures on Instagram. The jury found two things: that Meta's public safety claims were fraudulent, and that the platform's design took "grossly unfair advantage" of children. One day later, a Los Angeles jury found Meta and Google negligent in the first social media addiction trial ever to reach a verdict, with a finding of "malice, oppression, or fraud" that triggered punitive damages.


For three decades, Section 230 of the Communications Decency Act shielded US internet platforms from liability for content posted by their users. The statute treated platforms as intermediaries rather than publishers, and until now it was the technology industry's most reliable legal defense. These verdicts are the first sign that courts are no longer willing to extend tech companies the benefit of the doubt they once took for granted.


A new legal theory made both verdicts possible: courts targeted the platform's design instead of its content. Algorithmic recommendations and engagement-maximizing features built without meaningful age verification were deemed negligent, and Section 230 provided no protection.


AI's weaker legal footing


Social media companies were founded in an era of public optimism about tech, so when the psychological harms became clear, they could claim they were merely hosting other people's content.


We now understand the child safety risks that highly engaging apps can pose. For AI deployers, the situation is less favorable than it was for social media: when a chatbot responds to a user, it is authoring new content, so social media's go-to defense of being a mere intermediary doesn't apply at all.


In May 2025, a federal judge ruled that an AI chatbot is a product, not a platform (Garcia v. Character Technologies). The case involved a 14-year-old who died by suicide after months of interaction with a Character.AI chatbot. The court separated the application's design from its conversational content, and held that defects in the product are fair game for liability claims. This ruling set the legal standard for AI.


Where things stand now


Since Garcia, five wrongful death suits against Character.AI and Google have settled. OpenAI faces eight or more active wrongful death suits. In one of them, its own moderation system flagged 377 self-harm messages from a 16-year-old, yet no safety mechanism activated. 

The first wrongful death claim against Google Gemini was filed this month.


Forty-two state attorneys general have demanded AI safeguards by January 2026, Kentucky has already sued, and six states have introduced chatbot-specific child safety legislation.


The limits of the law


The child safety risks are severe, and both the litigation and a regulatory response are warranted. Unfortunately, regulation written after a tragedy tends to be technically naive and performative, creating compliance theater rather than actual safety. The AI industry understands these systems better than regulators do, and the technical depth required for meaningful safety cannot be captured by regulation alone.


Things will only get better if the industry notices, cares, and does the work.


What the standard of care looks like


The cases now in court follow the same failure pattern. Companies tested insufficiently before deployment and failed to act when their own systems raised alarms in production.

The emerging standard of care has three components:


  1. Pre-deployment evaluation means testing against the specific harm scenarios the product faces before users encounter it, so that evidence of reasonable care exists before anything goes wrong.


  2. Production monitoring with escalation means ongoing analysis of interactions connected to intervention protocols. A system that detects harm but takes no action offers little legal protection; a minimal sketch of this component follows the list.


  3. Technical remediation means fixing what testing and monitoring uncover, because a known defect left unaddressed is evidence of conscious disregard.
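
To make the second component concrete, here is a minimal sketch of monitoring wired to escalation. Every name in it (classify_self_harm_risk, handle_message, the score thresholds) is a hypothetical illustration rather than any vendor's API; a real deployment would call a production moderation model and route escalations to real intervention and review systems.

```python
from dataclasses import dataclass


@dataclass
class RiskSignal:
    user_id: str
    message: str
    score: float  # 0.0 (benign) to 1.0 (acute self-harm risk)


def classify_self_harm_risk(message: str) -> float:
    """Stand-in for a real moderation model call."""
    markers = ("hurt myself", "end it", "no reason to live")
    return 1.0 if any(m in message.lower() for m in markers) else 0.0


def handle_message(user_id: str, message: str,
                   escalate_at: float = 0.8, review_at: float = 0.5) -> None:
    signal = RiskSignal(user_id, message, classify_self_harm_risk(message))
    print(f"audit log: {signal}")  # record every score, not just the alarming ones

    if signal.score >= escalate_at:
        # The legally relevant step: detection is wired to intervention.
        print(f"[{user_id}] show crisis resources and pause the conversation")
        print(f"[{user_id}] page the on-call safety reviewer")
    elif signal.score >= review_at:
        print(f"[{user_id}] queued for asynchronous human review")


handle_message("u-123", "some days there is no reason to live")
```

The design choice that matters here is the branch: a flagged message changes what the system does next, so the audit trail shows detection followed by action rather than detection followed by silence.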


Companies that build this infrastructure reduce their legal exposure. They also become easier to trust for enterprise customers evaluating vendor risk and for investors trying to make responsible bets. Most importantly, they build safer systems that protect their most vulnerable users.


The case for acting now


Social media had decades before accountability arrived, decades that AI won’t get. The risks to child safety are already clear, and legislation and litigation are moving fast. Meta’s $375 million verdict is an example of the cost of delay. The AI industry will do well to build safety infrastructure now, not later.




Conformance AI provides AI safety and compliance services. This analysis reflects our perspective as industry participants; it is not legal counsel.