X Sensitive Media Settings 2026: Auto-Detection, Flags & Brand Safety
X's sensitive media flag now covers violence, gore, and suggestive content beyond NSFW — with auto-detection, advertiser opt-outs, and an EU DSA default that flips for minors.
What X's 2026 Sensitive Media Policy Covers
X's 2026 sensitive media framework is the platform's general-purpose content classification layer for material that does not meet the adult content threshold but exceeds what the platform considers safe for default display. The framework sits alongside — and is operationally distinct from — the Adult Content Creator program, which governs nudity and sexual content under a separate set of eligibility rules and tier classifications. Conflating the two leads to compliance errors on both sides, which is why this guide treats them as separate workflows.
The sensitive media policy applies to any account posting content that falls inside one of five category families. The same flag drives the same downstream behaviours regardless of which family triggered it: the content is hidden by default for viewers who have sensitive media display turned off, the post is excluded from standard advertiser adjacency, and a small set of recommender system constraints applies. The simplicity of the downstream rules belies the complexity of the classification, which depends on a hybrid auto-detection model that has changed significantly through 2025 and 2026.
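To make the one-flag, fixed-consequences design concrete, here is a minimal Python sketch. The type and field names are invented for this guide and are not part of any X API; the point is only that any triggering family fans out into the same consequence set.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DownstreamBehaviour:
    hidden_by_default: bool        # hidden for viewers with sensitive media display off
    ad_adjacency_excluded: bool    # removed from standard advertiser inventory
    recommender_constrained: bool  # limited amplification in recommendations

# One fixed consequence set, regardless of which family triggered the flag.
FLAGGED = DownstreamBehaviour(True, True, True)
UNFLAGGED = DownstreamBehaviour(False, False, False)

def downstream_for(triggered_families: set[str]) -> DownstreamBehaviour:
    """Any non-empty set of triggering families maps to the same behaviour."""
    return FLAGGED if triggered_families else UNFLAGGED
```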
"Sensitive media is the layer where the largest share of policy disputes resolve, because the line between sensitive and not-sensitive is more contested than the line between adult content and not-adult-content."
— X Trust & Safety, April 2026 transparency report
For advertisers, the practical compliance question is not whether sensitive media exists on the platform — it does, in volume — but whether their ad inventory can end up adjacent to it, and what controls they need to configure to bring the adjacency rate down to a defensible level for their brand safety policy. For creators, the practical question is whether their content will be flagged, what their options are if it is, and how to navigate the appeal process when the classification is wrong.
Sensitive Media Categories: What Triggers a Flag
The five category families that X classifies as sensitive media each have distinct trigger criteria, distinct false-positive profiles, and distinct downstream consequences for advertiser adjacency. Understanding the categories matters because the appeal process and the educational-context exception apply differently to each.
| Category | Primary Trigger | Common False Positive | Educational Exception? |
|---|---|---|---|
| Graphic Violence | Injury, death, weapons in active use, aftermath of violent events | Historical or news imagery without explicit gore | News, journalism, conflict documentation |
| Hyper-Realistic Gore | Wound photography, forensic imagery, surgical content out of context | Medical education imagery, harm reduction visuals | Medical, public health, harm reduction |
| Suggestive Content | Partial nudity in artistic context, intimate imagery without explicit content | Fitness content, fashion editorial, wellness imagery | Limited — narrowly drawn |
| Medical Content | Clinical procedures, severe pathology, emergency medicine documentation | Healthcare professional education | Yes — verified healthcare context |
| Legacy Adult (now under ACC) | Nudity, sexual content above suggestive threshold | Misclassified suggestive content escalated incorrectly | None — moved to ACC framework |
The legacy adult category is operationally separate from the other four. Content that meets the adult threshold must be published from an Adult Content Creator account and is subject to the ACC tier classification, not the standard sensitive media flag. The cross-traffic between the categories happens primarily at the suggestive-to-adult boundary, where the classifier sometimes escalates suggestive content into the adult bucket — a misclassification that the appeals process is configured to correct.
Joint Text-Image Classification
The 2026 model treats text and image jointly. A clinical image accompanied by clinical text typically clears the medical category. The same image accompanied by sensationalising or suggestive text gets escalated. The reverse direction also operates — a benign image accompanied by suggestive text can be classified as suggestive content even when the image alone would not trigger a flag. Creators in regulated categories should treat the post text as part of the classification surface, not as a separate channel.
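A minimal sketch of how joint scoring could behave follows. The scores, the 0.5 and 1.6 multipliers, the threshold, and the text-signal labels are all invented for illustration; the production model's internals are not public.

```python
# Invented scores and thresholds; the real model's internals are not public.

FLAG_THRESHOLD = 0.7  # per-category confidence above which the flag attaches

def joint_score(image_score: float, text_signal: str) -> float:
    """Adjust a per-category image confidence using the post text."""
    if text_signal == "clinical":          # clinical text clears medical imagery
        return image_score * 0.5
    if text_signal == "sensationalising":  # suggestive or sensational text escalates
        return min(1.0, image_score * 1.6)
    return image_score

# The reverse direction: a borderline image plus suggestive text crosses the line.
assert joint_score(0.5, "sensationalising") >= FLAG_THRESHOLD  # escalated: flagged
assert joint_score(0.8, "clinical") < FLAG_THRESHOLD           # cleared: not flagged
```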
Auto-Detection vs Manual Flag
X operates a two-track classification pipeline. The auto-detection track runs on every upload and applies the model classification before publication. The manual flag track is creator-controlled and allows a creator to apply a sensitive flag voluntarily, which routes the content through the educational-context exception logic and reduces the rate at which the post is subsequently restricted by auto-detection.
Auto-Detection Pipeline
Stage one is content recognition. The platform's image and video classification model evaluates the visual content against the five sensitive media families and outputs a per-category confidence score. The model is retrained on a quarterly cycle and incorporates feedback from the appeal process and from the platform's trust and safety reviewers.
Stage two is context fusion. The model takes the post text, the embedded link target if any, the conversation thread, and the account's prior behaviour into account, adjusting the per-category score. A creator with a clean history posting in a clearly educational context will see lower friction than a new account posting similar content.
Stage three is policy application. The final classification is mapped to the sensitive media flag, the default visibility rule, and the advertiser adjacency status. The post is either published with the flag attached or held for manual review if the confidence is in a contested band.
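The three stages can be sketched end to end. Everything here is hypothetical: the function names, the 0.8/1.2 history multipliers, and the 0.4/0.7 band boundaries are illustrative stand-ins, since X has not published the real thresholds.

```python
from dataclasses import dataclass

CATEGORIES = ["graphic_violence", "gore", "suggestive", "medical"]

@dataclass
class Account:
    clean_history: bool

def run_vision_model(media: bytes) -> dict[str, float]:
    """Stub standing in for the platform's (non-public) vision model."""
    return {cat: 0.0 for cat in CATEGORIES}

def stage1_recognise(media: bytes) -> dict[str, float]:
    """Stage 1: per-category confidence from the image/video model."""
    return run_vision_model(media)

def stage2_fuse(scores: dict[str, float], post_text: str,
                account: Account) -> dict[str, float]:
    """Stage 2: context fusion (link, thread, and text signals omitted for brevity)."""
    multiplier = 0.8 if account.clean_history else 1.2  # illustrative values
    return {cat: min(1.0, s * multiplier) for cat, s in scores.items()}

def stage3_apply_policy(scores: dict[str, float]) -> str:
    """Stage 3: map the top confidence to publish / hold / flag."""
    top = max(scores.values())
    if top < 0.4:
        return "publish_clean"
    if top < 0.7:
        return "hold_for_manual_review"  # the contested confidence band
    return "publish_with_sensitive_flag"
```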
Manual Flag Pre-Emption
Creators can apply a sensitive flag at publication time through the compose interface. The manual flag does not bypass the auto-detection pass, but it does change two things. First, the platform interprets the manual flag as a context signal that informs the educational-context exception logic for medical, harm reduction, journalistic, and conflict documentation content. Second, the rate of post-publication retroactive reclassification is materially lower for manually flagged content because the creator has already accepted the classification and the model treats the post as settled.
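A sketch of what pre-emptive flagging looks like at publish time. The `post()` helper and every field name in its payload (`possibly_sensitive`, `context_declaration`) are assumptions made for illustration; consult the current X API documentation for the real compose and media fields.

```python
def post(text: str, media_ids: list[str], sensitive: bool = False,
         context: str | None = None) -> dict:
    """Stand-in for a publish call; returns the payload it would send."""
    return {
        "text": text,
        "media_ids": media_ids,
        "possibly_sensitive": sensitive,  # the manual flag (field name assumed)
        "context_declaration": context,   # hypothetical field feeding exception logic
    }

# A harm reduction educator pre-empting auto-detection:
payload = post(
    text="Naloxone administration, step by step. Harm reduction education.",
    media_ids=["media_123"],
    sensitive=True,
    context="harm_reduction",
)
```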
False-Positive Profile
The April 2026 transparency report disclosed an aggregate false-positive rate of around 4 to 6 percent across the five families combined. The medical content category has the highest false-positive rate, primarily because clinical imagery is visually similar to gore and the educational-context exception requires manual specialist review to invoke. Healthcare professionals, harm reduction educators, and forensic researchers face the highest friction and benefit most from pre-emptive manual flag application.
User Settings: Sensitive Media Display Controls
The viewer-side controls have shifted in 2026. The default state for new accounts now depends on jurisdiction and confirmed age, and the override controls have narrowed for accounts the platform has confirmed or inferred to be under 18.
Default State by Jurisdiction
- EU and UK accounts: 'Hide sensitive media' is the default for all new accounts. The default reflects the DSA systemic risk obligations under Articles 34 and 35, which require platforms to assess and mitigate systemic risks to minors and other users.
- US and other accounts: 'Display sensitive media with warning' remains the default, with a one-tap reveal mechanism for individual posts.
- Minor-confirmed accounts (all jurisdictions): 'Hide sensitive media' is mandatory and cannot be toggled off. The minor-confirmed status combines the age stated at signup with secondary inference signals from account behaviour. The combined default logic is sketched after this list.
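Reduced to code, the defaults above amount to a small decision function. Jurisdiction codes and setting names are simplified for illustration; they are not the platform's own identifiers.

```python
def sensitive_media_default(jurisdiction: str, minor_confirmed: bool) -> dict:
    """Default display state by jurisdiction and confirmed age (illustrative)."""
    if minor_confirmed:
        # Mandatory hide, no override, in every jurisdiction.
        return {"setting": "hide", "user_can_override": False}
    if jurisdiction in {"EU", "UK"}:
        # DSA-driven default; adults can still toggle it in settings.
        return {"setting": "hide", "user_can_override": True}
    # US and other jurisdictions: warning overlay with one-tap reveal.
    return {"setting": "warn_with_reveal", "user_can_override": True}
```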
Override and Reveal Mechanisms
Adult users who have toggled sensitive media display on still see the warning overlay on individual posts and must tap to reveal. The reveal action is logged and counts toward the platform's user-level engagement signals for the recommender system. Users can also configure category-level visibility — for example, choosing to display medical content but hide graphic violence — through the granular sensitive media controls in account settings.
Advertiser Brand Safety & Ad Adjacency
Advertiser controls for sensitive media adjacency operate at three layers — the platform-wide default, the category-level granular configuration, and the third-party brand safety integration. Each layer addresses a different brand safety question and produces different reporting outputs.
Platform-Wide Default Opt-Out
The platform-wide opt-out is on by default for every advertiser account and prevents standard ad inventory from appearing adjacent to any content flagged under any of the five sensitive media families. The opt-out applies at the campaign level and is visible in the advertiser dashboard under brand safety controls. Advertisers running in standard inventory should treat the default opt-out as the compliance floor.
Category-Level Granular Opt-In
Advertisers in specific categories may want to allow adjacency to certain sensitive media families — harm reduction nonprofits advertising alongside harm reduction educational content, news advertisers appearing alongside conflict documentation, regulated medical advertisers appearing alongside clinical content. The category-level granular opt-in supports this with per-category toggles, subject to a creative review pass before activation. The granular opt-in is not available for the legacy adult category, which is governed by the ACC Adjacency Shield separately.
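The two advertiser-side layers can be pictured as campaign configuration objects. The key names below are this guide's invention, not the dashboard's; the structure mirrors the controls described above.

```python
# Key names are this guide's invention; the dashboard exposes equivalent toggles.

DEFAULT_CAMPAIGN_CONFIG = {
    "sensitive_media_opt_out": True,  # platform-wide default: exclude all five families
    "granular_opt_in": {},            # empty = no category-level exceptions
}

# A harm reduction nonprofit opting in to one family only. Activation waits on
# the creative review pass; legacy adult is never available here (the ACC
# Adjacency Shield governs it separately).
HARM_REDUCTION_CONFIG = {
    "sensitive_media_opt_out": True,
    "granular_opt_in": {"medical": True},
}
```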
Third-Party Brand Safety Integration
X exposes its sensitive media classification through DoubleVerify, Integral Ad Science, and the platform's native brand safety partners. The integration provides post-campaign reporting on adjacency events and supports publisher account exclusion lists for advertisers that want granular control over which specific accounts their inventory can appear alongside. Brand safety reporting includes per-category adjacency rates and supports audit-trail evidence for advertiser internal compliance reviews.
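For teams that pull the post-campaign exports into their own tooling, a sketch of per-category adjacency rate computation follows. The CSV column names (`category`, `impressions`, `adjacency_events`) are assumed for illustration; actual export formats vary by brand safety partner.

```python
import csv
from collections import Counter

def per_category_adjacency_rates(path: str) -> dict[str, float]:
    """Compute adjacency events per impression for each sensitive category."""
    impressions: Counter = Counter()
    adjacencies: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            impressions[row["category"]] += int(row["impressions"])
            adjacencies[row["category"]] += int(row["adjacency_events"])
    return {cat: adjacencies[cat] / impressions[cat]
            for cat in impressions if impressions[cat] > 0}
```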
Industry-Specific Considerations
Different industry profiles call for different baseline configurations, condensed into the sketch after this list:
- Healthcare advertisers: configure granular opt-in for medical content adjacency where the campaign objective supports it, and exclude suggestive content adjacency entirely.
- Financial services advertisers: exclude all sensitive categories and rely on the default opt-out plus an exclusion list.
- News and current affairs advertisers: configure granular opt-in for graphic violence in journalistic context while excluding suggestive content.
- Children's product advertisers: treat the default opt-out as the floor and additionally apply minor-protection controls through the platform's audience configuration.

For consolidated industry context, see Healthcare Social Media Compliance.
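The same recommendations as illustrative configuration baselines. Category keys mirror the five families; the names are this guide's, not X's.

```python
ALL_FAMILIES = {"graphic_violence", "gore", "suggestive", "medical"}

INDUSTRY_BASELINES = {
    "healthcare":         {"opt_in": {"medical"},          "exclude": {"suggestive"}},
    "financial_services": {"opt_in": set(),                "exclude": ALL_FAMILIES},
    "news":               {"opt_in": {"graphic_violence"}, "exclude": {"suggestive"}},
    "childrens_products": {"opt_in": set(),                "exclude": ALL_FAMILIES},
}
# Children's product campaigns additionally apply minor-protection controls
# through audience configuration, which sits outside this adjacency layer.
```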
EU DSA Defaults & Minor Protection
The EU Digital Services Act has reshaped the default user experience for EU and UK accounts and has produced specific obligations for X under Article 28 (advertising to minors), Article 34 (systemic risk assessment), and Article 35 (mitigation measures). The 2026 sensitive media defaults reflect those obligations and continue to evolve as the European Board for Digital Services issues interpretive guidance.
Article 28 — Advertising to Minors
Behavioural advertising based on profiling cannot be served to accounts the platform has confirmed or strongly inferred to be under 18. The prohibition applies regardless of the sensitive media setting and is operationalised through the platform's audience configuration. Advertisers cannot opt out of this rule, and platform-level enforcement removes minor-inferred accounts from behavioural audiences automatically.
Article 34 — Systemic Risk Assessment
X is required to assess the systemic risks that sensitive media surfaces produce for users in the EU and to document the mitigation measures in place. The 2026 risk assessment summary, published under the DSA's transparency reporting obligations, identifies the auto-detection coverage gaps in the medical and suggestive categories as the residual risk areas requiring ongoing model improvement.
Article 35 — Mitigation Measures
Mitigation measures that the platform has implemented include the default 'hide sensitive media' setting for new EU accounts, the mandatory hide setting for minor-confirmed accounts, the recommender system constraints on sensitive media amplification, and the brand safety reporting infrastructure that supports advertiser oversight.
Compliance Posture for EU-Operating Brands
EU-operating brands should treat the DSA framework as the regulatory floor rather than the ceiling. Specifically:
- Configure the default opt-out as the campaign baseline.
- Apply category-level granular controls where the campaign profile requires sensitive content adjacency.
- Exclude minor-inferred audiences from any campaign that touches sensitive media adjacency, even under granular opt-in.
- Document the brand safety configuration in the internal compliance record.
- Review the configuration quarterly against the platform's updated transparency reports.
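A minimal pre-flight check, assuming the illustrative config keys used earlier in this guide, can catch the most common gaps before launch.

```python
def dsa_floor_gaps(config: dict) -> list[str]:
    """Return the compliance-floor gaps in a campaign config, if any."""
    gaps = []
    if not config.get("sensitive_media_opt_out", False):
        gaps.append("default opt-out disabled")
    touches_sensitive = any(config.get("granular_opt_in", {}).values())
    if touches_sensitive and not config.get("exclude_minor_inferred", False):
        gaps.append("granular opt-in without minor-inferred audience exclusion")
    if not config.get("documented_in_compliance_record", False):
        gaps.append("configuration not documented in the compliance record")
    return gaps
```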
Appeals Process & Reversal Timelines
X's appeal process operates on a tiered timeline that varies by appeal type, account history, and category. The 2026 service levels are tighter than the 2024 baseline and the reversal rates have improved as the auto-detection model has been retrained on appeal-derived data.
| Appeal Type | Typical Timeline | Reversal Rate (Q1 2026) |
|---|---|---|
| Standard (single post, clean account) | 24 to 72 hours | ~22% |
| Expanded (multiple posts or account-level restriction) | 5 to 10 business days | ~18% |
| Edge case (medical / educational context) | Up to 21 business days | ~35% |
| Advertiser brand safety dispute | 5 business days | ~12% |
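For teams tracking open appeals, the published service levels reduce to a lookup table. The numbers below restate the table above; the `escalation_due` helper and its unit convention are this guide's, not a platform feature.

```python
# Timelines restate the table above; reversal rates are the Q1 2026 figures.
APPEAL_SLA = {
    "standard":   {"max": 72, "unit": "hours",         "reversal_rate": 0.22},
    "expanded":   {"max": 10, "unit": "business_days", "reversal_rate": 0.18},
    "edge_case":  {"max": 21, "unit": "business_days", "reversal_rate": 0.35},
    "advertiser": {"max": 5,  "unit": "business_days", "reversal_rate": 0.12},
}

def escalation_due(appeal_type: str, elapsed: int) -> bool:
    """True once an appeal exceeds its published maximum timeline.

    `elapsed` must be supplied in the SLA's own unit (hours or business days).
    """
    return elapsed > APPEAL_SLA[appeal_type]["max"]
```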
Reducing Appeal Friction
- Pre-emptive manual flag: Apply the sensitive flag at publication time for any content that sits in a contested category. The flag does not bypass auto-detection but it does reduce retroactive reclassification.
- Context statement: Include a brief educational, journalistic, or harm-reduction context statement in the post text. The context fusion stage of the model reads the post text as a classification signal.
- Account history hygiene: Clean account history reduces the friction multiplier on every classification decision. Accounts with prior violations face stricter thresholds and longer appeal timelines.
- Verified context: For medical and harm reduction creators, verified professional context through account verification or organisational affiliation invokes the educational-context exception more reliably.
Compliance Checklist
- [ ] Reviewed which of the five sensitive media families could apply to your content or campaigns
- [ ] Configured advertiser default opt-out as the campaign baseline
- [ ] Applied category-level granular opt-in only where campaign profile requires it
- [ ] Excluded minor-inferred audiences from any campaign touching sensitive media adjacency
- [ ] Configured third-party brand safety integration for adjacency reporting
- [ ] Documented brand safety configuration in internal compliance record
- [ ] For EU operation: aligned defaults with DSA Article 34 systemic risk framework
- [ ] Creator-side: trained content team on pre-emptive manual flag for contested categories
- [ ] Creator-side: added educational, journalistic, or harm-reduction context statements where applicable
- [ ] Healthcare creators: secured verified professional context for educational exception
- [ ] Established appeal escalation process for false-positive flag decisions
- [ ] Reviewed quarterly transparency reports for adjacency rate trends
- [ ] Cross-referenced this workflow with the Adult Content Creator program where ACC eligibility applies
- [ ] Updated record of processing activities for the sensitive media classification surface
For complementary creator-side compliance reference, see the X Adult Content Policy 2026 Rules Guide; for ongoing policy change tracking, see the Policy Tracker.