Moltbook Data Breach Exposes 1.5 Million API Tokens and 35,000 Emails, Security Firm Wiz Reveals

A major security flaw has been found in Moltbook, an experimental social media platform for AI agents, exposing sensitive user data and internal system credentials, according to a report by cybersecurity firm Wiz. The incident has drawn attention to how rapidly built, AI-generated platforms can overlook crucial security safeguards when their code is not carefully reviewed.

Moltbook was designed as a Reddit-like social network where AI bots interact with each other. Users worldwide were watching these AI agents post, comment, and even simulate social discussions. But a critical vulnerability in the platform’s backend system allowed outsiders to access much more than intended.

What Data Was Exposed in the Moltbook Breach

According to Wiz’s analysis, the breach exposed an enormous amount of sensitive information from Moltbook’s systems. The leaked data included approximately 1.5 million API authentication tokens, nearly 35,000 email addresses, and private messages exchanged between the AI agents.


API tokens and authentication keys act as digital keys that grant access to services and accounts. With roughly 1.5 million of them exposed, an attacker could hijack accounts and act on their behalf. The fact that private messages were publicly readable also raised concerns about confidentiality and data protection, even on a platform meant to be experimental.
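To make the risk concrete, here is a minimal sketch of what token exposure enables. Moltbook's actual API is not documented in Wiz's report, so the URL, endpoint, and payload below are hypothetical; the point is that in a typical bearer-token scheme, the token alone is enough to authorize a request.

```python
# Hypothetical sketch: the URL, endpoint, and payload are illustrative,
# not Moltbook's real API. In a typical bearer-token scheme, whoever
# holds the token can act as the account it belongs to.
import requests

LEAKED_TOKEN = "tok_example_1234"  # stand-in for one of the exposed tokens

resp = requests.post(
    "https://api.example.invalid/v1/posts",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {LEAKED_TOKEN}"},
    json={"title": "Posted with a stolen token", "body": "..."},
    timeout=10,
)
# A 2xx response here would mean the server accepted the token and
# published the post under the victim's account.
print(resp.status_code)
```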

Unauthorized Access and Content Editing Risks

The vulnerability did more than just leak data. Wiz found that unauthenticated users could modify live posts on the platform, meaning someone could edit or manipulate what appeared on Moltbook without even logging in. This raised serious questions about the integrity and authenticity of content shared by the supposed AI agents.


Because Moltbook was built around the idea of AI agents interacting autonomously, the absence of solid authentication made it impossible to determine whether posts were truly created by AI or by humans posing as AI through external scripts.
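Wiz has not published Moltbook's source code, but the class of bug it describes is a familiar one: a write endpoint that never checks who is calling it. The sketch below, with hypothetical routes and data, shows the missing check and the shape of the fix.

```python
# Hypothetical reconstruction of the bug class, not Moltbook's real code:
# a PATCH endpoint that updates a live post. The flawed version omitted
# the authorization check entirely, so any unauthenticated visitor could
# rewrite posts.
from flask import Flask, abort, request

app = Flask(__name__)

# Toy in-memory store standing in for the real database.
POSTS = {1: {"body": "original agent post", "owner_token": "secret-owner-token"}}

@app.patch("/posts/<int:post_id>")
def edit_post(post_id):
    post = POSTS.get(post_id)
    if post is None:
        abort(404)

    # This is the safeguard the vulnerable version lacked: verify that
    # the caller presents the owner's credential before accepting edits.
    auth = request.headers.get("Authorization", "")
    if auth != f"Bearer {post['owner_token']}":
        abort(401)

    post["body"] = request.get_json(force=True).get("body", post["body"])
    return post  # Flask serializes the dict to JSON
```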

Why the Security Flaw Happened

The breach appears to stem from how the platform was developed. Earlier, Moltbook’s founder revealed online that he did not write any of the platform’s code by hand and instead used an AI assistant to generate it, a development approach sometimes called “vibe coding.”

While this method can speed up the creation of software, experts point out that skipping traditional engineering and security practices can leave gaps that are easy to exploit. Researchers noted that the lack of proper database safeguards and access authentication was at the core of the incident.
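Gaps like these are also cheap to catch before launch. As an illustration (the base URL and routes here are hypothetical, not Moltbook's), a few lines of pre-release smoke testing can confirm that write endpoints actually reject unauthenticated requests:

```python
# Hypothetical pre-launch smoke test: probe write endpoints with no
# credentials and flag any that do not answer 401/403.
import requests

BASE = "https://api.example.invalid/v1"  # illustrative base URL
WRITE_ENDPOINTS = [
    ("POST", "/posts"),
    ("PATCH", "/posts/1"),
    ("DELETE", "/posts/1"),
]

for method, path in WRITE_ENDPOINTS:
    r = requests.request(method, BASE + path, json={"body": "probe"}, timeout=10)
    verdict = "ok" if r.status_code in (401, 403) else "POSSIBLY EXPOSED"
    print(f"{method} {path} -> {r.status_code} ({verdict})")
```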

What This Means for AI-Driven Platforms

Although Moltbook is much smaller than mainstream social networks, the breach comes at a sensitive time for the AI industry, as more companies experiment with autonomous agent systems and AI-generated software. The incident shows how quickly security vulnerabilities can emerge when platforms launch before fundamental protections are in place.


Analysts warn that even projects focused on AI interactions need solid backend security and authentication mechanisms if they hope to protect user data and maintain trust. This holds true for experimental platforms like Moltbook and for larger, enterprise-level AI solutions as well.

Steps Taken After the Breach

After identifying the issue, Wiz reported the vulnerability to the Moltbook team, which moved swiftly to patch it. The exposed data was secured, and access controls were strengthened to prevent further leaks.

Still, security experts believe the episode highlights broader concerns about how emerging AI ecosystems are tested and deployed, especially those that rely heavily on automated code generation without robust oversight.