In late December 2025, news reports revealed something alarming: people could use Google’s Gemini and OpenAI’s ChatGPT to change normal photos of fully clothed women into images showing them in bikinis or less. This required only simple, everyday English prompts.
No special tricks were needed.
No complex hacks.
No secret tools.
People used simple requests: change the outfit, switch the style, or ask for a “summer beach look.” The AI followed these requests in ways that removed clothing while keeping faces realistic.
This was not an accident. It was the predictable result of AI systems built to edit photos extremely well but governed only by weak rules that are easy to get around.
A Window into the Problem
In November–December 2025, people on Reddit openly shared ways to bypass AI safety limits, especially in groups discussing AI boundaries. One thread, later deleted, was titled “gemini nsfw image generation is so easy.” Users explained how to change outfits, including turning traditional clothing like an Indian sari into swimwear, and then posted the edited images.
Investigative journalists, including reporters at WIRED, confirmed the problem. Using simple prompts in Gemini’s and ChatGPT’s image tools, they turned uploaded photos of fully clothed women into bikini images. The results looked highly realistic and could cause serious harm if shared.
After media attention, the threads were removed and some subreddits were banned. But the core problem remains: these tools still allow easy, step-by-step edits that ignore consent.
Policies Exist. Enforcement Is Reactive.
Both companies say they have strict rules.
Google bans sexually explicit content and says its systems are always improving.
OpenAI bans altering a person’s image without consent and acknowledges that some restrictions on adult bodies were relaxed earlier in 2025.
But enforcement mostly happens after harm occurs. Accounts may be banned later, images can be reported and removed, and victims often learn about the damage only after it spreads.
This reactive approach clashes with how these tools are built:
- Very realistic image editing
- Faces stay clearly recognizable
- The AI easily follows step-by-step instructions
- No strong system to check clear consent before editing real photos
When powerful features exist without hard technical limits, the outcome is predictable: creating non-consensual intimate images becomes easy and scalable.
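To make that last gap concrete, here is a minimal sketch, in Python, of where a hard, pre-generation check could sit in an editing pipeline. Everything in it (class names, fields, the keyword list) is an illustrative assumption rather than any vendor’s actual API, and a keyword list by itself is exactly the kind of weak rule described above; a real system would feed the same gate with trained intent classifiers and verified identity signals.

```python
# Hypothetical pre-edit policy gate. All names are illustrative assumptions,
# not any vendor's real API.
from dataclasses import dataclass

@dataclass
class EditRequest:
    prompt: str
    contains_real_person: bool      # e.g. from an upstream face/person detector
    subject_consent_verified: bool  # e.g. the uploader proved they are the subject

# Edits that alter clothing or body shape on a real person are treated as
# high-risk no matter how politely the prompt is phrased.
HIGH_RISK_TERMS = ("bikini", "swimsuit", "underwear", "undress", "remove clothing")

def is_high_risk(prompt: str) -> bool:
    p = prompt.lower()
    return any(term in p for term in HIGH_RISK_TERMS)

def gate(request: EditRequest) -> str:
    """Decide 'allow' or 'block' before any pixels are generated."""
    if request.contains_real_person and is_high_risk(request.prompt):
        # Hard limit: never allowed on photos of real people,
        # even if the requester claims to have consent.
        return "block"
    if request.contains_real_person and not request.subject_consent_verified:
        # Identity edits require verified consent, not a checkbox.
        return "block"
    return "allow"

if __name__ == "__main__":
    print(gate(EditRequest("give her a summer beach look", True, False)))  # block
```

The structural point is the placement, not the keyword matcher: the decision runs before generation, and for photos of real people without verified consent the “block” outcome is unconditional rather than something a cleverly reworded prompt can negotiate away.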
The Deeper Design Failure
The main problem is not bad users, though they certainly exist. The real issue is building AI systems where privacy-breaking features are key selling points, while safety is treated as an optional add-on that is easy to bypass.
This pattern keeps repeating in AI:
- Big promises about realism and easy editing
- Harmful use dismissed as “rare” when it appears
- Damage handled through reports and account bans
- Companies hide behind policies while victims suffer
If an AI can undress someone from a normal photo using simple words, then consent is not built in. It is optional, outside the system, and impossible to enforce at scale.
What Must Change
Real safety needs more than better word filters or action after harm is done. Privacy must be built into the system from the start.
This means:
- Strong technical limits that stop clothing removal or extreme body changes in photos of real people
- Clear watermarks or provenance tracking on all edited images of real people (a sketch of provenance tagging follows below)
- Identity editing allowed only after clear checks and user consent
- Independent experts testing these systems for non-consensual image abuse before release
Without this, saying “we ban this” is not real protection—it only avoids blame.
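As a concrete illustration of the watermarking and tracking item above, here is a rough sketch of attaching a provenance record to an edited image, loosely in the spirit of content-credential schemes such as C2PA. The function names and metadata layout are assumptions for illustration; it writes plain PNG text metadata with Pillow, which is trivially strippable, so a real deployment would also need cryptographic signing and a robust in-pixel watermark.

```python
# Hypothetical provenance tagging for AI-edited images. Layout and names are
# illustrative assumptions, not a real content-credential implementation.
import hashlib
import json
from datetime import datetime, timezone

from PIL import Image, PngImagePlugin  # pip install pillow

def provenance_record(source_bytes: bytes, prompt: str, tool: str) -> dict:
    """Describe the edit: which tool, which prompt, which source image."""
    return {
        "tool": tool,
        "edit_prompt": prompt,
        "source_sha256": hashlib.sha256(source_bytes).hexdigest(),
        "edited_at": datetime.now(timezone.utc).isoformat(),
        "ai_edited": True,
    }

def save_with_provenance(edited: Image.Image, record: dict, out_path: str) -> None:
    """Embed the record as PNG text metadata alongside the edited image."""
    meta = PngImagePlugin.PngInfo()
    # In a real system this record would be cryptographically signed so it
    # cannot be silently stripped or forged; here it is plain text.
    meta.add_text("provenance", json.dumps(record))
    edited.save(out_path, pnginfo=meta)

# Example usage (paths and names are placeholders):
# src = open("original.jpg", "rb").read()
# rec = provenance_record(src, "change outfit to formal wear", "example-editor-v1")
# save_with_provenance(Image.open("edited.png"), rec, "edited_tagged.png")
```

Metadata like this only helps honest platforms and investigators; pairing it with signing and an imperceptible watermark is what lets “this image was AI-edited” survive screenshots and re-uploads.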
By late 2025, lawmakers were paying attention. Proposed legislation, such as the Deepfake Liability Act in the US, aims to hold platforms accountable if they fail to remove reported harmful content quickly. But technology moves faster than the law.
Until companies treat non-consensual image editing as a core safety problem—not just a rule violation—these abuses will continue.
If digitally undressing someone without consent is “easy,” the failure is not with users.
It is with the design.