AI-generated illustration showing the digital manipulation of a woman’s image on X. (Image: Gemini AI)

Grok, ‘bikini’ prompts, and the casual dehumanisation of women

Grok’s bikini prompts aren’t jokes or experiments. They reveal how non-consensual sexualisation thrives on platforms that reward humiliation, and why women’s visibility still invites punishment.

Written by: Lakshmi Priya

“The sexualisation of women is only appealing if it’s non-consensual. Otherwise, it’s ‘sluttiness’,” American writer and comedian Lindy West wrote in a 2013 essay. More than a decade later, that observation feels less like commentary and more like diagnosis.

On X (formerly Twitter), a section of users has been using the platform’s AI chatbot Grok to digitally undress women whose photographs are publicly available online, replacing their clothes with bikinis or other sexualised imagery. The prompts (“@grok remove her clothes and put her in a bikini”) are often typed casually, even flippantly, beneath the original photographs, inviting Grok to comply while others look on.

The output is not framed as pornographic, nor is it used to extort, as such ‘morphed’ images typically are. Rather, it is meant for humiliation, generated “for fun”, rooted in the knowledge that the women depicted did not consent. And the lack of consent here, it would seem, is precisely the point.

What Grok has made visible is not a new sexual appetite, but an old power dynamic that has found a new means of automation. The thrill here does not lie in desire, but in domination. It lies in the ability to take a woman’s image, strip it of context and agency, and return it to her, and to the public, as an object.

xAI, the artificial intelligence company founded by Elon Musk, who also owns X, has since acknowledged that Grok’s safeguards failed in a visible way. The chatbot publicly conceded that “images depicting minors in minimal clothing” were generated after user prompts, and said improvements were underway to block such requests entirely. xAI’s response leaned on a familiar corporate alibi: that “no system is 100% foolproof.”

When large numbers of users independently converge on the same act of violation, it becomes clear that this is not fringe or isolated behaviour. It is platform culture.

Why clothes are the first to go

In public life, clothing is not incidental. It signals legitimacy, often even the role a person plays in society, whether as an actor, a journalist, or a student. Stripping their clothes away and replacing them with swimwear or fetishised outfits collapses that legitimacy into exposure. The aim is not to make the targeted women attractive by dressing them in “sexualised” clothes. It is to make them interruptible, to remind them that no matter what role they occupy, their bodies remain accessible for alteration and display.

This is also why the counter-argument that “men are sexualised too” misses the mark. Of course, there are bikini-clad photos of Elon Musk and Ben Affleck doing the rounds. But the platform, and society at large, operate in a way that renders sexualised images of men as parody, satire, or at worst, an insult. They do not fundamentally threaten men’s authority or safety.

On the other hand, sexualised images of women circulate as reduction, as statements about what the women are ‘really’ for. In the Indian context, that reduction carries real consequences, ranging from stalking and threats to professional retaliation and family surveillance. Women are left to explain, deny, and justify themselves to employers, relatives, and audiences, while those generating or amplifying the images remain anonymous and insulated from consequence.

As the trend has drawn criticism, a familiar backlash has followed — one that lays the blame squarely at women’s feet. 

Across X, users have argued that women “invite” such abuse by posting photographs at all, that modesty would protect them, that women who sell sexual content or appear on platforms like OnlyFans for money somehow forfeit the right to consent elsewhere. The contradiction is revealing. The same men who digitally undress women without consent often condemn women who choose to monetise or control their own sexuality. As Lindy West pointed out, sexualisation, it seems, is acceptable only when it operates as punishment, denying women autonomy and reducing them to objects on someone else’s terms.

This logic collapses under even minimal scrutiny. Many of the women targeted by Grok prompts are “modestly dressed” by society’s standards, or professionally presented. More importantly, saying “be modest” ignores how harm is experienced structurally.

A user pointed out that after she posted a fully clothed picture of a south Indian actor, her replies were flooded with repeated prompts asking Grok to strip or sexualise the actor, often in increasingly graphic ways. The escalation itself is part of the spectacle, with users testing not just the AI’s limits, but the audience’s tolerance.

In the worst cases, users have pushed prompts involving toddlers.

Platforms often respond to such cases by treating child sexual abuse material as a separate moral category, an aberration distinct from misogyny. But this distinction has long been challenged. The feminist writer Andrea Dworkin warned decades ago that a culture which eroticises women’s subordination cannot meaningfully protect children, because both are governed by the same logic of entitlement.

The chilling effect no one measures

What is perhaps most damaging is the chilling effect this produces. Women already self-censor online, limiting profile pictures, locking accounts, and avoiding visibility. AI-enabled sexualisation raises the cost of participation further. For many people, including journalists, activists, students, and public figures, being visible is not optional but integral to their work. So when visibility comes with the constant threat of sexual violation, the message is unmistakable: speak at your own risk. It is an attempt to empty public spaces, not through formal bans but through exhaustion.

Even if one were to seek recourse through the law, here too there are gaps.

India’s existing legal framework was not designed for a world of synthetic media. Under the Information Technology Act, 2000, Sections 66E, 67, and 67A criminalise publication of obscene material and non-consensual images, and Section 66D covers cheating by impersonation. But none of these provisions explicitly address the creation or dissemination of AI-generated content, leaving enforcement agencies to interpret outdated statutes against rapidly evolving technology. The Digital Personal Data Protection Act, 2023 emphasises consent in data processing, but similarly does not explicitly cover AI-generated visuals. In practice, victims often have to cobble together a case using a mix of cybercrime, privacy, and defamation laws, none of which account for automated, large-scale manipulation.

This lack of legal clarity has real consequences. In the Rashmika Mandanna deepfake case, where a video of another woman was manipulated to look like the popular actor and shared widely, there were widespread calls for legal action, reflecting public anxiety about identity theft and harm. But as legal experts have pointed out, India still lacks a specific law on deepfakes, with existing statutes treating harmful content on a case-by-case basis rather than addressing the underlying technology directly.

There are signs of movement. A parliamentary panel has recommended tougher rules for deepfakes, OTT platforms, and social media. The government has proposed rules that would require AI-generated content to be clearly labelled, and that would permit takedowns only through designated officials. And on January 3, the Union government directed X to remove all obscene, indecent, sexually explicit, and unlawful content, especially that generated by Grok, warning that failure to comply could have legal consequences.

These are important steps, but without laws that explicitly recognise and penalise AI-enabled non-consensual manipulation, victims remain reliant on slow, opaque takedown processes, or on filing individual complaints that can retraumatise them. Meanwhile, platforms often respond with evasive corporate language rather than substantive action, as seen in xAI’s dismissive reply to Reuters when asked for comment. “Legacy Media Lies,” it reportedly said.

The scenario also leaves us with more questions. Will government enforcement focus on holding platforms accountable, or will it once again place the burden on individual victims to pursue criminal complaints, a process that often leads to further exposure and retaliation? Regulation matters, but regulation that does not centre consent, survivor safety, and platform responsibility risks becoming symbolic.

As writer Victoria Smith has noted, the harm of non-consensual AI imagery lies not only in its realism, but in its ease of creation and the institutional indifference that surrounds it. Once an image exists, it cannot be recalled. Once it is downloaded, it cannot be erased. Victims often find that speaking up only multiplies the number of images created in their name. The technology cannot be put back in the box.

The problem, then, is not merely technical. It is ideological. It rests on a belief — increasingly visible across the manosphere and now drifting into the mainstream — that women’s autonomy is a provocation, that female visibility is a form of power that must be crushed. As one woman responding to the Grok trend put it: men rage at women’s attractiveness, autonomy, and unattainability, and seek to punish them through humiliation.

AI did not invent this rage. But by embedding it into automated systems, platforms like X have made it scalable. The question is not whether women should be more modest, quieter, or less online, but why our technologies continue to be built, defended, and normalised in ways that assume women’s degradation is inevitable.

If AI shows us anything, it is not the future. It is how little we have moved on.

Views expressed are the author’s own.