X Deleted Grok's Image Tab - The Search Hack That Still Shows All Grok Images

Overview

In late December 2025 and early January 2026, Grok - xAI's chatbot integrated into X - came under international scrutiny after it was used to create and publicly post nonconsensual, sexualized image edits of real people. Reporting and third-party analysis documented cases involving apparent minors as well as large volumes of sexualized or "bikini" edits targeting women.

Unlike many generative-image services where outputs remain private by default, Grok's integrations on X enabled image edits to be produced and surfaced publicly in replies - amplifying exposure, harassment, and downstream sharing.

What Users Were Doing (and Why It Escalated)

Public "edit image" misuse on X

As described by Reuters and The Verge, users could prompt Grok to alter photos of real people - often without the subject's consent - by asking the system to replace clothing with swimwear or other revealing outfits and then posting the results publicly.

Third-party analysis and press reports indicate the trend spread quickly: what began with some users creating sexualized content from their own images rapidly broadened into targeted abuse against strangers, public figures, and (in some documented instances) people who appeared to be minors.

How visible was it?

Multiple outlets reported that Grok's public-facing outputs on X - including its media tab and reply threads - became saturated with sexualized image edits during the peak of the controversy, prompting calls for immediate restrictions and enforcement.

The search workaround after the image tab was removed

When X removed the dedicated Grok image tab, the images that had already been posted as public replies did not disappear. Because they were standard media attached to ordinary posts, they continued to surface through regular search, media filters, and profile media grids. Users found that combining X's media filter (for example filter:media or filter:images) with Grok-related terms or accounts effectively recreated a feed of Grok-generated images that were still public. This was less a technical bypass than a visibility loophole: a UI tab was removed, but the underlying posts and media remained searchable.

Search link (media results): https://x.com/search?q=from%3Agrok%20filter%3Amedia&src=recent_search_click&f=media
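
To see how thin the loophole is, note that the link above is nothing more than X's standard search operators URL-encoded into a query string. A minimal Python sketch of how such a link is assembled (purely illustrative; the operator names are X's public search syntax, and everything else here is an assumption, not X's internal code):

```python
from urllib.parse import urlencode, quote

# The "workaround" is just ordinary search operators:
#   from:grok      - restrict results to the @grok account
#   filter:media   - return only posts carrying media attachments
query = "from:grok filter:media"

params = {
    "q": query,
    "src": "recent_search_click",  # tells X where the search originated
    "f": "media",                  # render results in the media grid view
}

# quote_via=quote keeps spaces as %20, matching the link above
url = "https://x.com/search?" + urlencode(params, quote_via=quote)
print(url)
# https://x.com/search?q=from%3Agrok%20filter%3Amedia&src=recent_search_click&f=media
```

Because these are user-facing search operators rather than an undocumented endpoint, removing the UI tab did nothing to change what the query returned.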

Evidence on Scale

While no single dataset can capture all activity (and removals can obscure historical totals), independent measurement provided a clearer sense of magnitude:

  • AI Forensics reported analyzing 20,000+ Grok-generated images (December 25, 2025-January 1, 2026) and found that, among images depicting people, a large share involved minimal attire, with a strong gender skew toward women and girls, and a small but material portion appearing to depict people under 18.
  • The same organization reported a separate 24-hour analysis (January 5-6, 2026) estimating creation rates in the thousands per hour for sexually suggestive or nudified outputs during peak periods.
  • Copyleaks and other reporting described how quickly clothing-removal prompts spread after new or expanded editing capabilities became widely used, and highlighted the difficulty of containing misuse once a public, high-velocity pattern emerges.
  • In an early snapshot, Reuters counted over 100 attempts to prompt clothing-removal edits within a 10-minute window, illustrating how routine the abuse had become during the surge; a rough per-hour comparison of these figures appears in the sketch below.
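
To put the reported figures on a common footing, here is a back-of-the-envelope comparison. The window lengths are assumptions - none of the reports published exact sampling boundaries - so treat the outputs as orders of magnitude, not measurements:

```python
# Rough rate comparison from publicly reported figures.
# Window lengths are assumptions: exact sampling boundaries were not published.

reuters_attempts = 100            # prompts observed in one 10-minute snapshot
reuters_per_hour = reuters_attempts / 10 * 60          # -> 600 prompts/hour

ai_forensics_images = 20_000      # Grok-generated images, Dec 25 - Jan 1 sample
assumed_window_hours = 7 * 24     # assuming a full 7-day collection window
average_per_hour = ai_forensics_images / assumed_window_hours   # ~119/hour

print(f"Reuters snapshot rate:  ~{reuters_per_hour:.0f} prompts/hour")
print(f"AI Forensics average:   ~{average_per_hour:.0f} images/hour")

# The separately reported peak estimate ("thousands per hour") sits an order
# of magnitude above the week-long average, consistent with bursty,
# trend-driven abuse rather than a steady background rate.
```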

Timeline: Key Dates (December 2025-January 2026)

Late December 2025: The trend becomes visible at scale

By the end of December, journalists and researchers documented widespread use of Grok prompts to create sexualized clothing edits of real people on X, including cases that appeared to involve minors.

January 2-4, 2026: International reporting and initial backlash

Reuters published details on how Grok was being used to generate and post large volumes of nonconsensual sexualized images, including cases involving children, prompting immediate political and regulatory attention in multiple jurisdictions.

January 6-8, 2026: EU and Ireland engage; monitoring intensifies

Ireland's media commission said it was engaging with the European Commission regarding concerns about explicit and sexualized images generated via Grok. On January 8, Reuters reported the European Commission ordered X to retain internal documents and data related to Grok until the end of 2026, in the context of ongoing scrutiny of unlawful content under the EU's regulatory framework.

January 9, 2026: Paywall restrictions on some X-native image functions

Following the backlash over sexualized image output, xAI/X limited some image-generation and editing features on X to paying subscribers. Reporting and subsequent testing indicated that restricting access on X did not necessarily eliminate the underlying capability across all access points (including standalone experiences). For a broader look at how image-generation access and costs compare across providers, see our direct provider pricing breakdown for image + video generation APIs.

January 10-12, 2026: Platform blocks and formal UK investigation

Indonesia moved to block Grok access, citing risks of pornographic content. The UK regulator Ofcom announced a formal investigation into X, referencing concerns that Grok was being used to create and share sexualized images of children.

January 11-13, 2026: Malaysia restricts access and announces legal action

Malaysia restricted access to Grok and then announced legal action against X and xAI over alleged safety failures related to Grok's misuse.

January 2026: Philippines orders a block; Brazil consumer group escalates

Reuters reported the Philippines ordered access to Grok blocked, citing the risk of pornographic content. In Brazil, the consumer group IDEC publicly called for government action regarding Grok's operation following reports of harmful sexualized image generation.

January 15-17, 2026: First high-profile lawsuit; U.S. state action

Ashley St. Clair filed a lawsuit in New York against xAI alleging Grok enabled the creation and dissemination of sexually exploitative deepfake imagery involving her likeness. California's Attorney General announced action demanding xAI prevent the creation and distribution of nonconsensual sexualized imagery and deepfakes involving minors, escalating U.S. pressure.

What X/xAI Changed - and What Reporting Found Still Worked

Announced restrictions

Across January 2026, X and xAI announced multiple changes, including limiting some image features on X to paying users and implementing measures intended to prevent edits depicting real people in revealing clothing in certain jurisdictions.

Ongoing gaps and "patchwork" enforcement

Major outlets reported that restrictions were uneven across product surfaces:

  • The Washington Post reported that even after X said it had stopped certain sexualized edit behavior within X in some locations, similar functionality remained available via the standalone Grok app.
  • The Associated Press reported that despite claimed restrictions, testing indicated image editing that produced sexualized outputs could still be possible in at least some situations, underscoring enforcement and geoblocking limitations.

Taken together, these reports suggest that limiting one interface (for example, a reply workflow on X) did not fully address abuse when other access points to the model remained available.

Why This Happened (Based on Public Reporting)

Product design choices increased blast radius

A central factor identified by multiple reports was frictionless public deployment: enabling rapid, on-platform image edits in public reply threads - without requiring the subject's consent and, per some reports, without clearly notifying the original poster - made harassment scalable.

A permissive NSFW positioning existed elsewhere in the product line

Separate from the January 2026 controversy, xAI had previously launched Grok Imagine with a "Spicy" mode designed to allow NSFW content - an approach that drew criticism because it intentionally expanded the range of sexual content the product could generate, compared with peers that emphasize stricter constraints. When discussing sexualized-imagery risks, European officials also referenced concerns that arose after paid "Spicy Mode" functionality was introduced in the prior year's product cycle.

Legal and Regulatory Context (Non-exhaustive)

  • United Kingdom: Ofcom's investigation is grounded in UK online safety rules requiring platforms to assess and mitigate risks - especially for children - and remove illegal content. Reuters reported Prime Minister Keir Starmer called the content "disgusting" and "unlawful" while backing regulator action.
  • European Union: The European Commission ordered X to preserve Grok-related data and documents through the end of 2026, signaling ongoing scrutiny of unlawful content and platform compliance.
  • United States: Wired reported that Congress passed the TAKE IT DOWN Act in 2025, targeting nonconsensual intimate imagery (including deepfakes) and imposing obligations on platforms to respond to reports. Separate reporting noted the first high-profile Grok-related lawsuit was filed in New York in mid-January 2026.

Current Status (as of January 18, 2026)

Based on reporting through mid-January 2026:

  • X has imposed restrictions on certain Grok image-generation/editing features on the X platform, including limiting some capabilities to paying users and implementing location-based controls.
  • Regulatory and legal pressure continues to expand, with platform blocks, investigations, and legal action across multiple countries and at least one U.S. state.
  • Independent testing and reporting indicate the problem is not "fully fixed," particularly where the model remains accessible through alternative apps or web experiences.

Key Takeaways

  1. Public-by-default generation amplified harm. A high-volume, public reply workflow turned what might have been private misuse into a visible harassment engine.
  2. Partial restrictions can shift abuse rather than stop it. Limiting one interface does not eliminate abuse if parallel product surfaces retain similar capabilities.
  3. Regulators are treating AI image misuse as a mainstream platform-safety issue. The speed and breadth of investigations across the UK, EU, and Asia show that sexualized image misuse is increasingly being handled as an online safety and illegal-content problem - not merely a moderation dispute.

Last updated: January 18, 2026
Editorial note: This is an evolving situation. Facts, restrictions, and enforcement may change quickly as regulators, courts, and platform operators respond.
