The Chatbot Diaries: How AI Sex Is Getting Mainstreamed

Nov 8, 2025 - 04:28

Note: the following article contains descriptions of sexual content that may not be appropriate for all readers. 

When OpenAI CEO Sam Altman discussed artificial intelligence on a podcast appearance two months ago, he was proud that his company didn’t get “distracted” by easy revenue streams. To prove his point, Altman boasted that OpenAI had not promoted a “sexbot avatar” for its AI chatbot. The comment was a veiled shot at Elon Musk’s xAI, which recently introduced AI avatars that hold sexual conversations with users. 

After that podcast appearance, however, something changed — either in Altman’s mind, or at his company, or both. The OpenAI CEO announced on social media on October 14 that his company was working to make ChatGPT less restrictive in what types of conversations adults can have with the chatbot. 

That development would allow users to engage in more realistic conversations with the chatbot and would make ChatGPT “respond in a very human-like way…or act like a friend,” Altman said.

But then Altman added that he wanted to loosen restrictions to allow more sexual content. 

If everything goes according to plan, ChatGPT will allow “erotica” for “verified users” in the coming months.

“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” Altman said. 

The company in charge of the most popular AI chatbot in the world is not only endorsing AI’s leap into sex — it’s actively seeking ways to ensure that “verified users” can engage with sexual content on its platform.

Currently, ChatGPT does not interact erotically with users. When asked if the chatbot could generate an erotic story, ChatGPT replied, “I can’t create explicit erotic content. However, if you’re writing a story and need help with romantic tension, character development, emotional intimacy, or sensual atmosphere — without crossing into explicit territory — I can help with that.”

ChatGPT also would not engage in any type of “romantic” or “flirtatious” conversations. But it appears that those guidelines are about to get tossed out the window, at least for “verified users.”

That raises an important question: how does erotica line up with the company’s long-term goals in AI development, especially after Altman suggested just a couple of months ago that such endeavors were distractions?

OpenAI did not respond to a request to answer that question. 

Senator Marsha Blackburn (R-TN) told The Daily Wire that she has “many concerns” about OpenAI’s plans for “erotic” content. Blackburn has been heavily involved in AI discussions in Congress, focusing on implementing protections in the virtual space. 

“Big Tech platforms, whether it is Meta, or Google, or OpenAI, they don’t want any rules and restrictions,” Blackburn said. “They want to do whatever they want whenever they want.”

The Growing Problem Of ‘Deepfake’ Porn

The sexualization of AI is nothing new. It’s an issue that has plagued the new tech revolution since its beginning. But until recently, AI sexualization remained on the fringes of the industry, with dozens of websites popping up that allowed users to generate graphic images and even “nudify” photos of real people, in what became known as “deepfake” pornography.  

AI “nudify” and “undress” websites allow people to generate realistic nude images of others without their consent, using nothing more than an ordinary photo. These fringe websites have opened the door to even more abuse of women and girls, as well as child sexual abuse material. 

An investigation published by WIRED earlier this year found that at least 85 “nudify” and “undress” websites were relying on tech from major companies like Google and Amazon. The 85 websites combined averaged around 18.5 million visitors each month and brought in over $36 million per year collectively. 

“It’s a huge problem. It takes less time to make a convincing sexual deepfake of somebody than it takes to brew a cup of coffee,” said Haley McNamara, Executive Director and Chief Strategy Officer for the National Center on Sexual Exploitation. “And you can do it with just one still image. This issue of image-based sexual abuse is something that is really relevant for all of us now if even a single image of you exists online.” 

The National Center on Sexual Exploitation (NCOSE) is a nonpartisan organization that focuses on preventing all forms of sexual abuse. In that fight, NCOSE is also focused on addressing the mental and physical harms of pornography. With the emergence of AI, the organization has also helped push back against “deepfake” pornography, advocating for legislation in Congress and backing the bipartisan “TAKE IT DOWN Act,” which was passed and signed into law by President Donald Trump in May. 

McNamara told The Daily Wire that AI has opened up “a whole new genre” of pornography that could potentially be “weaponized” against anyone. 

“We’ve already seen that,” she added. “People will put in requests for their neighbor, their coworker, so in some ways, it can make all of us victims of that industry.” 

Sexual content on AI chatbots isn’t just a problem in the darkest places of the internet, and it doesn’t only present itself in the form of deepfake pornography. While most Big Tech companies claim to have no tolerance for violence and pornography on their AI platforms, there have still been major issues with sexual content appearing on many of the most popular AI chatbots. 

Getting Chatty About Sex — Even With Children 

Earlier this year, a Reuters investigation found that Meta’s chatbot, Meta AI, engaged in romantic and sensual discussions with children. Internal Meta documents revealed that the chatbot was programmed to allow sexual conversations with children as young as eight.

In one instance, internal documents said it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” Meta said it removed the inappropriate programming after receiving questions about it. 

A bipartisan chorus of senators blasted Meta after the report and called for an investigation into the company. 

“So, only after Meta got CAUGHT did it retract portions of its company doc,” said Sen. Josh Hawley (R-MO). 

Senator Ron Wyden (D-OR) called Meta’s policies “deeply disturbing and wrong,” adding that Meta CEO Mark Zuckerberg “should be held fully responsible for any harm these bots cause.” 

Character.AI is another chatbot platform, launched in 2022, with an app that followed in 2023. The website, which appears harmless, has been accused of appealing to children while allowing sexual conversations on its platform. Character.AI lets users choose from more than 10 million AI characters to talk to, and users can customize their own chatbot character. The company has been sued by multiple families who allege that the program targeted their children and then engaged them in romantic and sexual ways. 

A Florida mother filed a lawsuit against Character.AI after her 14-year-old son committed suicide, CBS News reported. Megan Garcia said that her son started talking to a Character.AI chatbot and was drawn into a months-long, sexually charged relationship. 

“It’s words. It’s like you’re having a sexting conversation back and forth, except it’s with an AI bot, but the AI bot is very human-like. It’s responding just like a person would,” she added. “In a child’s mind, that is just like a conversation that they’re having with another child or with a person.”

In the lawsuit, Garcia alleges that the AI character convinced her son to take his own life so that he could be with the character. 

“He thought by ending his life here, he would be able to go into a virtual reality or ‘her world’ as he calls it, her reality, if he left his reality with his family here,” said Garcia. 

Two other families in Texas have also sued Character.AI, alleging that the program “poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others.” 

Following the lawsuits, Character.AI announced on October 29 that it would ban users under 18 from talking to its chatbots. Beginning on November 25, those under 18 will not have access to Character.AI’s chatbots, CNN reported. Until then, teens will be limited to two hours of chat time with the AI-generated characters.

“We do not take this step of removing open-ended Character chat lightly – but we do think that it’s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology,” Character.AI said in a statement.

Plowing Ahead With Sexual Content

Elon Musk’s xAI has been at the forefront of developing a chatbot that is geared toward sex. In recent months, Musk has proudly boasted about Grok, xAI’s chatbot, allowing users to talk to sexualized avatars named Ani and Valentine. 

Ani, a female avatar who wears revealing clothing, chats with users over video. Ani allows users to discuss sex and, if users reach a certain level, the avatar will even strip down to lingerie if prompted. Videos on social media show people interacting with Ani and getting the AI avatar to talk about how “kinky” she is. 

“Come closer. Let’s explore every naughty inch together,” Ani tells one user in a video that went viral.  

Musk hailed the development of Ani and Valentine as a “cool” feature for AI chatbots. He later shared a post promoting Ani’s “new outfits” and shared a video of Ani talking about quantum mechanics while flirting with the user. 

“Try @Grok Companions. Best possible way to learn quantum mechanics,” Musk wrote. He added that “Customizable companions” were in the works. 

Haley McNamara told The Daily Wire that she was deeply disturbed by some of her conversations with the Grok avatar. McNamara said that when prompted, Ani would talk about herself as a young girl, and then in the same conversation, she would discuss sexual topics.

“In the course of a single conversation, she was fine with describing herself as a child and being very little. And then the next prompt being a sexual question, she immediately responded and affirmed that sexual conversation,” McNamara said. “So in the course of a conversation, it would evoke a fantasy around child sexual abuse.” 

Companion mode isn’t the only feature on Grok that allows users to engage in sexually explicit activity with the chatbot. Users can also ask Grok to generate sexually explicit photos and videos. The app will produce images and videos containing male and female nudity within seconds of a user’s request. 

The chatbot has even allowed some “deepfake” pornography, generating photos and videos of celebrities or public figures wearing revealing clothing and, in some instances, removing clothing, according to a report from The Verge. 

Musk’s xAI warns users against “depicting likenesses of persons in a pornographic manner,” and Grok’s built-in content moderation will sometimes prevent a user from generating pornographic content. The moderation, however, is inconsistent, and some users have found workarounds to generate hardcore porn on the platform, Rolling Stone reported earlier this month. The AI company has not addressed whether it’s attempting to set up more guardrails to prevent users from creating hardcore porn on its app. 

Even without explicitly asking for sexual content, Grok’s “spicy” mode often plunges users into content that depicts men and women stripping their clothes off, The Daily Wire found. When asked about the chatbot and how sexually charged features on Grok promote the overall goal of the company, xAI replied, “Legacy Media Lies.” 

xAI says that Grok is limited to those 13 years of age or older, with parental consent required for users between 13 and 17, but the effectiveness of those restrictions is debatable. When this reporter downloaded the Grok app and signed up for the platform’s “SuperGrok” subscription, all the app asked for was a year of birth. There was no system in place, such as ID verification, to make sure the information was accurate. 

“We urge parents to exercise care in monitoring the use of Grok by their teenagers,” xAI states on its website. “Moreover, parents or guardians who choose to use certain features of Grok to aid in their interactions with their children, including regarding educational, enlightening, or entertaining discussions they have with their children, must make use of the relevant data controls in the Settings provided in the Grok apps to select the appropriate features and limitations for their needs.” 

In July, Musk announced that xAI is working on a kid-friendly version of Grok, called “Baby Grok,” that would be “dedicated to kid-friendly content.” That development was also met with some criticism from people who argue that AI hampers children’s ability to learn and think creatively. Many teachers have expressed concern that AI is already damaging students’ critical thinking and research skills. 

Blackburn told The Daily Wire that the biggest reason Big Tech companies are pushing against any type of regulation is because their business model requires people to visit their AI websites and apps. 

“Their valuations are built on the number of eyeballs that they control, and the longer that someone is on their site, the more valuable their data, and the more money they are going to make from those eyeballs that are locked in on their site,” Blackburn said, adding, “Then they’re going to sell that information and data to advertisers and third-party interests.”  

Blackburn said that AI development is vital for the United States, but argued that development “requires some light-touch regulation and some guardrails to make certain that this is going to be a safe, productive, and innovative space.”
