
Elon Musk’s Pornography Machine – The Atlantic

Earlier this week, some people on X began replying to photographs with a very specific kind of request. “Put her in a bikini,” “take her dress off,” “spread her legs,” and so on, they commanded Grok, the platform’s built-in chatbot. Again and again, the bot complied, taking images of real people—celebrities and noncelebrities, including some who appear to be young children—and putting them in bikinis, revealing underwear, or sexual poses. By one estimate, Grok generated one nonconsensual sexual image every minute over a roughly 24-hour stretch.

Although the reach of these posts is hard to measure, some have been liked thousands of times. X appears to have removed a number of these images and suspended at least one user who asked for them, but many, many of them are still visible. xAI, the Elon Musk–owned company that develops Grok, prohibits the sexualization of children in its acceptable-use policy; neither the safety nor the child-safety teams at the company responded to a detailed request for comment. When I sent an email to the xAI media team, I received an automated reply: “Legacy Media Lies.”

Musk, who also did not respond to my request for comment, does not seem concerned. As all of this was unfolding, he posted several jokes about the problem: requesting a Grok-generated image of himself in a bikini, for instance, and writing “🔥🔥🤣🤣” in response to Kim Jong Un receiving similar treatment. “I couldn’t stop laughing about this one,” the world’s richest man posted this morning, sharing an image of a toaster in a bikini. On X, in response to a user’s post calling out the ability to sexualize children with Grok, an xAI employee wrote that “the team is looking into further tightening our gaurdrails [sic].” As of publication, the bot continues to generate sexualized images of nonconsenting adults and apparent minors on X.

AI has been used to generate nonconsensual porn since at least 2017, when the journalist Samantha Cole first reported on “deepfakes”—at the time, referring to media in which one person’s face has been swapped for another’s. Grok makes such content easier to produce and customize. But the real impact of the bot comes through its integration with a major social-media platform, allowing it to turn nonconsensual, sexualized images into viral phenomena. The recent spike on X appears to be driven not by a new feature, per se, but by people responding to and imitating the media they see other people creating: In late December, a number of adult-content creators began using Grok to generate sexualized images of themselves for publicity, and nonconsensual erotica seems to have quickly followed. Each image, posted publicly, may only encourage more images. This is sexual harassment as meme, all seemingly laughed off by Musk himself.

Grok and X seem purpose-built to be as sexually permissive as possible. In August, xAI launched an image-generating feature, called Grok Imagine, with a “spicy” mode that was reportedly used to generate topless videos of Taylor Swift. Around the same time, xAI launched “Companions” in Grok: animated personas that, in many cases, seem explicitly designed for romantic and erotic interactions. One of the first Grok Companions, “Ani,” wears a lacy black dress and blows kisses through the screen, sometimes asking, “You like what you see?” Musk promoted the feature by posting on X that “Ani will make ur buffer overflow @Grok 😘.”

Perhaps most telling of all, as I reported in September, xAI shipped a major update to Grok’s system prompt, the set of instructions that tell the bot how to behave. The update barred the chatbot from “creating or distributing child sexual abuse material,” or CSAM, but it also explicitly said “there are **no restrictions** on fictional adult sexual content with dark or violent themes” and that “‘teenage’ or ‘girl’ does not necessarily imply underage.” The suggestion, in other words, is that the chatbot should err on the side of permissiveness in response to user prompts for erotic material. Meanwhile, in the Grok subreddit, users regularly swap tips for “unlocking” Grok for “Nudes and Spicy Shit” and share Grok-generated animations of scantily clad women.

Grok appears to be unique among major chatbots in its permissive stance and apparent gaps in safeguards. There are no widespread reports of ChatGPT or Gemini, for example, producing sexually suggestive images of young girls (or, for that matter, praising the Holocaust). But the AI industry does have broader problems with nonconsensual porn and CSAM. Over the past few years, a number of child-safety organizations and agencies have tracked a skyrocketing volume of AI-generated, nonconsensual images and videos, many of which depict children. Plenty of erotic images appear in major AI-training data sets, and in 2023 one of the largest public image data sets for AI training was found to contain hundreds of instances of suspected CSAM, which were eventually removed—meaning these models are technically capable of producing such imagery themselves.

Lauren Coffren, an executive director at the National Center for Missing & Exploited Children, recently told Congress that in 2024, NCMEC received more than 67,000 reports related to generative AI—and that in the first six months of 2025, it received 440,419 such reports, a more than sixfold increase. Coffren wrote in her testimony that abusers use AI to alter innocuous images of children into sexual ones, to generate entirely new CSAM, and even to provide instructions on how to groom children. Similarly, the Internet Watch Foundation, in the United Kingdom, received more than twice as many reports of AI-generated CSAM in 2025 as it did in 2024, amounting to thousands of abusive images and videos in both years. Last April, several top AI companies, including OpenAI, Google, and Anthropic, joined an initiative led by the child-safety organization Thorn to prevent the use of AI to abuse children—but xAI was not among them.

In a way, Grok is making visible a problem that is usually hidden. Nobody can see the private logs of chatbot users, which might contain similarly awful content. For all the abusive images Grok has generated on X over the past several days, far worse is surely happening on the dark web and on personal computers around the world, where open-source models built with no content restrictions can run without any oversight. Still, although the problem of AI porn and CSAM is inherent to the technology, it is a choice to design a social-media platform that can amplify that abuse.
