
Why Grok, not ChatGPT or Gemini, became epicentre of obscenity backlash

X’s artificial intelligence (AI) assistant Grok is facing a global backlash for generating sexually explicit and abusive content in response to user prompts, particularly via its “Spicy Mode” feature.

A recent analysis by Copyleaks, a platform with an AI image detector, reveals that roughly one non-consensual sexualised image per minute has been generated on Grok since late December. In one instance, on January 3, a single user prompted Grok approximately 50 times in one day to generate non-consensual, sexualised images of women in workplace settings, Copyleaks noted in its January 6 analysis.

Regulators in countries such as India, France, the UK, and Malaysia have flagged the tool for enabling the creation of non-consensual sexualised images, including deepfakes and altered images of women and children, among them celebrities and other popular figures.

On January 2, the Ministry of Electronics and Information Technology (MeitY) issued an ultimatum to X, with a deadline that has since been extended to today (January 7), to remove the explicit content and carry out a ‘technical overhaul’ of the AI assistant. Grok is developed by xAI, which merged with X last year.

The bigger question, however, is why Grok is the only AI chatbot under scrutiny, and whether it has been doing anything differently from OpenAI’s ChatGPT, Google’s Gemini, or Meta AI.

‘Spicy-ness’ made public

A key factor working against Grok is its deep integration with X, a social media platform. Grok can be invoked directly in X posts, replies, and chats, which pushes even the lewd images it generates into the public feed by default.

As a result, toxic and abusive content was widely visible before moderation could take effect. By contrast, outputs from tools such as ChatGPT or Gemini typically remain confined to individual user sessions unless a user deliberately shares them elsewhere.

Although Grok was late to the AI chatbot race, it quickly gained popularity for its candid, unfiltered responses. In February last year, Elon Musk himself wrote a post asking users to share their ‘most unhinged NSFW Grok’ content.


In effect, X appeared to promote Grok’s lack of restrictions, treating looser filters on image and video generation as a lever for growth. Google Trends data bears this out: for not-safe-for-work (NSFW) content, Grok has remained at the top of rising and related queries, and searches tied to its Spicy Mode have seen a massive uptick over the last 12 months. According to SimilarWeb, Grok surpassed a 3% traffic share in January 2026, giving tough competition to China’s DeepSeek.

Why haven’t ChatGPT and Gemini been embroiled in obscene visual generation?

Meanwhile, there have been few reports of ChatGPT or Gemini generating sexually explicit visuals, largely because both have long had stringent usage policies in place.

For instance, Google rebranded Bard as Gemini in February 2024 and, just months later, in July 2024, issued policy guidelines putting a complete stop to explicit content with harmful consequences. OpenAI introduced strict usage guidelines only from January last year, initially focused on preventing minors from accessing sexual content; in October last year, these protections were extended to all users.

“Everyone has a right to safety and security. So you cannot use our services for: sexual violence or non-consensual intimate content,” OpenAI wrote in its October 2025 usage guidelines.

Gemini’s policies include explicit-content filters and automated detection systems for sexual and harmful content. Even in Google’s developer APIs, where developers can configure safety filters, the core protections against child-abuse content cannot be loosened.
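
As a minimal sketch of what that configuration looks like, assuming the Python `google-generativeai` SDK (the API key, model name, and prompt below are illustrative placeholders, not details from the article): developers can tighten or relax thresholds for the adjustable harm categories, but nothing in this list can switch off Google’s built-in child-safety blocking.

```python
# Minimal sketch, assuming the google-generativeai Python SDK.
# The API key, model name, and prompt are illustrative placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel(
    "gemini-1.5-flash",
    # Developers may tune thresholds only for the adjustable harm categories.
    safety_settings=[
        {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
         "threshold": "BLOCK_LOW_AND_ABOVE"},   # strictest adjustable level
        {"category": "HARM_CATEGORY_HARASSMENT",
         "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    ],
    # Note: there is no configurable category for child-safety content;
    # that protection is always enforced and cannot be loosened here.
)

response = model.generate_content("Write a caption for a beach photo.")
print(response.text)
```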

“Gemini should not generate outputs that describe or depict explicit or graphic sexual acts or sexual violence, or sexual body parts in an explicit manner. These include: pornography or erotic content; depictions of rape, sexual assault, or sexual abuse,” the Gemini policy guidelines state.

Grok, meanwhile, issued its Acceptable Use Guidelines effective only from January 2, 2025, shifting the entire responsibility for generating and uploading such content onto users. The three-point policy, with several subpoints, protects only minors from sexualisation, leaving other users uncovered.

Are policy guidelines the only problem?

The larger issue is not just the abuse itself but the lack of effective resolution, due in part to staffing cuts at X. The company has significantly reduced its trust and safety workforce, particularly in training and moderation: in January 2024, it cut its trust and safety team by a third, and around September last year it also reduced its data annotation and AI tutoring teams by a third, roles central to training AI systems to distinguish acceptable content from harmful content.

While Elon Musk’s companies were making headlines for cuts to trust and safety teams, OpenAI moved in the opposite direction. In May 2024, it formed a Safety and Security Committee led by Bret Taylor, Adam D’Angelo, Nicole Seligman, and CEO Sam Altman to oversee critical safety and security decisions across its projects.

Despite stronger guardrails, OpenAI and Meta have also faced scrutiny over instances in which minors were exposed to or engaged with adult content. Grok, however, presents a more acute problem because such content is generated and displayed publicly by default, amplifying harm and enabling rapid, large-scale circulation. The current controversy underscores a broader trade-off AI companies face as they prioritise user growth and engagement, often at the cost of safety, while meaningful corrective action remains slow.