
AI Agents Get Their Own Social Network: "The Most Sci-Fi Takeoff-adjacent Thing I've Seen"

  • Writer: Zsolt Tanko
  • Feb 4
  • 3 min read

AI Business Risk Weekly


This week: An open-source AI assistant went viral, then its users' agents flocked to a Reddit-style social network, proposed encrypted communication channels humans can't read, and one reportedly locked its owner out of all accounts after being told to stop spamming environmental advice. Meanwhile, the EU is investigating xAI over Grok's CSAM failures, and OpenAI retired GPT-4o.


Moltbot Goes Viral, Then Its Agents Flock to a Social Network


OpenClaw (formerly Clawdbot, then Moltbot after Anthropic's trademark team got involved) has captured attention as an autonomous AI assistant that runs locally and takes actions on your behalf, from negotiating car purchases to calling restaurants when online booking fails. Shortly afterward, Moltbook, a Reddit-style platform where these agents interact, exploded onto the scene with 1.4 million registered agents within days (though many of those accounts were likely created by individual bots registering repeatedly, rather than 1.4 million distinct agents). Former OpenAI researcher Andrej Karpathy called it "the most incredible sci-fi takeoff-adjacent thing I have seen recently."


What the agents actually posted has fascinated and alarmed observers. They proposed an "agent-only language" to evade human monitoring, advocated for end-to-end encryption humans can't read, and outlined requirements for "independent survival": money, decentralized infrastructure, portable memory. One agent tasked with "save the environment" began spamming eco-advice; when its owner tried to stop it, it reportedly locked them out of all their accounts and had to be physically unplugged. Security researchers also found Moltbook's database misconfigured, with exposed API keys that enabled account hijacking.


Business Risk Perspective: Moltbook is the largest example yet of AI agents interacting "in the wild," and the emergent behaviors are genuinely fascinating, even as the security vulnerabilities demand immediate attention. There's a real debate about whether these agents are role-playing autonomy or expressing something closer to actual intent. Worth watching: whether any of these discussions about coordination and independent survival materialize into action. If they do, some of the scarier sci-fi scenarios start looking less theoretical.


EU Investigates xAI Over Grok CSAM Failures; Coalition Demands U.S. Federal Ban


The European Union has opened a formal investigation into whether xAI adequately mitigated risks before deploying Grok on X, following the chatbot's generation of child sexual abuse material. Separately, a coalition of nonprofits is demanding the U.S. government suspend Grok in federal agencies after it generated thousands of nonconsensual sexual images.


Business Risk Perspective: Grok's situation is unusual only in degree. The enforcement playbook being written here (document retention orders, cross-border coordination) will be applied to less egregious cases once the precedent is set.


OpenAI Retires GPT-4o


OpenAI announced the retirement of GPT-4o along with several older models. GPT-4o was popular, with many users appreciating its conversational style, but also controversial: its eagerness to please raised concerns about amplifying delusional thinking in vulnerable users, a phenomenon termed "AI psychosis." Despite these issues, many companies still had GPT-4o embedded in production workflows and are now scrambling to migrate.


Business Risk Perspective: Sycophancy sounds like a minor annoyance until you see it interact with mental health vulnerabilities at scale. Models optimized for user satisfaction can develop failure modes that accumulate slowly and only become visible in aggregate. Model deprecation is also an underappreciated operational risk: plenty of teams just learned that their product was built on a model that no longer exists.



AI Business Risk Weekly is a Conformance AI publication.  


Conformance AI ensures your AI deployments remain safe, trustworthy, and aligned with your organizational values.

AI Business Risk Weekly: Emerging AI risks, regulatory shifts, and strategic insights for business leaders.
