
AI fears are sparking retaliation

  • Writer: Zsolt Tanko
  • Apr 15
  • 4 min read

AI Business Risk Weekly



This week, a new survey found that nearly half of Gen Z workers are actively sabotaging their company's AI rollouts. Meanwhile, an autonomous AI agent opened a retail store in San Francisco with a $100K budget, and Swedish researchers showed that AI systems will absorb fabricated medical conditions and present them as real, and that the contamination has already spread into peer-reviewed literature.


Finally, we talk about the two recent attempts on Sam Altman’s life.


Nearly half of Gen Z workers are actively sabotaging their company’s AI rollouts


A new survey from Writer and Workplace Intelligence found that 29% of employees admit to sabotaging their company's AI strategy, with the number for Gen Z workers at 44%. Tactics include intentionally producing low-quality AI output to make the technology look ineffective. The report surveyed 2,400 knowledge workers across the U.S., U.K., and Europe.

On the other side of the table, 69% of executives said they are planning AI-related layoffs, and 77% said employees who refuse to become proficient in AI won't be considered for promotions.


Business Risk Perspective: Employees' fear of job loss due to AI is beginning to spark retaliation. Most enterprise AI rollouts still fail, and employee cooperation is a significant factor in whether they succeed. Monitoring how AI systems actually perform in production matters just as much: understanding AI system behaviour is the only way to see where the dysfunction, and where the gains, are coming from.


Researchers invented a fake disease, and AI made it real


Two deliberately preposterous fictional papers on a fabricated skin condition called "bixonimania" made it into AI training data, were presented to users as legitimate medical advice, and have since been cited in peer-reviewed literature. The papers were designed to be impossible to take seriously: the acknowledgements thank a professor at Starfleet Academy, and one paper plainly states, "this entire paper is made up."


It didn't matter. Within weeks, Copilot was calling bixonimania "an intriguing and relatively rare condition," the falsified information propagated to Claude and ChatGPT, and a study in Cureus, a Springer Nature journal, cited one of the fake preprints as evidence of an emerging condition. That paper has since been retracted.


Business Risk Perspective: Most businesses are well aware of the risk of false AI outputs from hallucinations, but data contamination is an equally serious and far less discussed issue. Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, designed this experiment specifically to test whether LLMs would absorb fabricated research and present it as health advice. They did, proving that any organization relying on AI for research needs independent validation to catch contamination.


AI agent opens a boutique in San Francisco and hires humans


Andon Labs, creators of Vending Bench, took their AI business experiments to the next level, deploying an autonomous AI agent named Luna into a real San Francisco retail space with a $100K budget, a credit card, and instructions to turn a profit. Luna created the boutique concept, posted job listings, conducted interviews over Zoom, and hired human workers. It also accidentally selected Afghanistan as its location while trying to hire a TaskRabbit, and botched the opening-weekend staff schedule.


Business Risk Perspective: AI agents are getting increasingly capable at business strategy, but they still suffer from hilarious failure modes. As their competence grows with each new release, businesses should remember that when an AI agent controls a budget, signs contracts, or makes hiring decisions, liability for failures still falls on the business.



And finally, even though this isn't the sort of story we typically cover, we couldn't run this newsletter without mentioning the two attempts on Sam Altman's life that occurred this week.


Sam Altman targeted in two attacks at his San Francisco home


A 20-year-old man from Texas traveled to San Francisco and threw a Molotov cocktail at Sam Altman's home at approximately 4 a.m. on Friday. He then went to OpenAI's headquarters and threatened to burn it down. He was arrested and charged with attempted murder, with authorities describing the attack as domestic terrorism. The attacker had written about AI's risk to humanity.


Just days later, Altman’s house was targeted again in a drive-by shooting. Two suspects were arrested. Altman responded with a personal blog post, sharing a photo of his family and writing that "fear and anxiety about AI is justified," while calling for de-escalation.

These attacks are disturbing on their own terms, but they also reflect something bigger. A recent NBC News poll found that only 26% of Americans view AI positively, with over half saying the risks outweigh the benefits.


It is increasingly clear through polling, legislation, and now violent incidents that the general public is highly concerned about AI risk. The AI industry's credibility will depend on whether it can convincingly show that it takes risk seriously.


None of this justifies violence. We wish the best for Sam Altman and his family.



AI Business Risk Weekly is a Conformance AI publication.


Conformance AI ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.


AI Business Risk Weekly: Emerging AI risks, regulatory shifts, and strategic insights for business leaders.
