Every moderation action X reports under the EU Digital Services Act, grouped by violation type and linked to the specific X Rule that governs each enforcement bucket. X's enforcement profile differs sharply from other platforms'; this timeline shows how.
Three patterns that operators who keep their X accounts in good standing watch for, and how to read this page like one of them.
01
How X's enforcement profile differs from other platforms
X's content moderation strategy has shifted toward visibility-based enforcement: rather than removing borderline content outright, the platform demotes, labels, or restricts interaction with it. As a result, removal counts in X's DSA report tend to be lower than Meta's or TikTok's, but visibility actions are higher. The categories that dominate — manipulation and spam, hateful conduct, financial scams — track X's published Rules. Each row in the matrix above can be cross-referenced with the specific Rule via the sidebar, so you can see exactly which X Rule a given enforcement bucket corresponds to.
02
What advertisers and account holders should know
X enforces ads under a separate ad-policy regime that is not surfaced in DSA reports — but content moderation outcomes still matter for advertisers, because organic content from brand handles falls under the same Rules as any user post. For accounts running campaigns in regulated verticals (crypto, finance, supplements), the 'Financial Scam' category is the leading indicator: when X tightens its scam-detection model, brand-handle posts are often the first false positives. The 'Hateful conduct' and 'Violent speech' categories are also worth tracking for editorial-driven brands or media companies whose posts may quote controversial material verbatim.
03
Reading the matrix in the context of X's policy churn
X has published a relatively high volume of Rule changes since 2023 (more than Meta, fewer than TikTok). Our scanner captures these changes and surfaces them as policy banners above the matrix. Because X's enforcement strategy is more visibility-based, the lag between a policy update and observable enforcement can be shorter than on platforms that primarily remove content; demotion actions are easier to scale than removal review queues. The heat scale is per-row, normalized to that category's 7-day max, so visibility-heavy categories don't drown out the lower-volume buckets.
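The per-row normalization described above can be sketched in a few lines. This is an illustrative sketch, not the production implementation, and the category names and counts below are hypothetical:

```python
def normalize_rows(rows):
    """Scale each category's daily counts to [0, 1] against that
    category's own 7-day maximum, so high-volume categories don't
    drown out low-volume ones (illustrative sketch)."""
    heat = {}
    for category, counts in rows.items():
        peak = max(counts) or 1  # guard against all-zero rows
        heat[category] = [round(c / peak, 2) for c in counts]
    return heat

# Hypothetical 7-day counts per enforcement category
rows = {
    "Manipulation and spam": [120, 90, 300, 150, 80, 60, 30],
    "Sensitive media":       [2, 1, 0, 3, 1, 2, 1],
}
heat = normalize_rows(rows)
# Each row peaks at 1.0 regardless of its absolute volume,
# so both categories render on the same visual scale.
```

The design point is that normalization happens within each row, never across rows: a 3-action day in 'Sensitive media' can look just as hot as a 300-action day in 'Manipulation and spam'.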
The rules they got banned for
Every action above stems from one of these X rules.
Not knowing what changed in these rules is what got the accounts in the table suspended, demonetized, or removed. Read the rule, or get alerted the moment X updates it — your call.
Every action is sourced from the European Commission's DSA Transparency Database. X submits each moderation decision — post removals, account suspensions, sensitive media labels — under Article 24(5) of the Digital Services Act. We aggregate their daily submissions under CC BY 4.0.
How do enforcement categories map to X's actual Rules?
The DSA Transparency Database defines 16 enforcement categories. We surface the most relevant X Rule (Hateful Conduct, Violent Speech, Financial Scam, Manipulation and Spam, Sensitive Media, IP Policy, etc.) for each category in the sidebar. So if you see a spike in 'Illegal or harmful speech' enforcement, the link takes you straight to X's Hateful Conduct Policy.
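Conceptually, the sidebar is a simple lookup from enforcement category to Rule. The mapping below is an abbreviated, illustrative subset, not the full 16-category table:

```python
# Illustrative subset: DSA enforcement category -> most relevant
# X Rule, as surfaced in the sidebar (abbreviated, not exhaustive)
CATEGORY_TO_RULE = {
    "Illegal or harmful speech": "Hateful Conduct Policy",
    "Scams and fraud": "Financial Scam Policy",
    "Intellectual property infringements": "IP Policy",
}

def rule_for(category):
    # Categories without a mapped Rule fall back to a generic label
    return CATEGORY_TO_RULE.get(category, "X Rules (general)")
```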
Why does X's enforcement profile look different from other platforms?
X has publicly shifted its content moderation approach toward less proactive removal and more visibility-based enforcement (labels, demotion, restricted reach). You'll often see lower removal counts than other platforms but higher 'visibility' actions. This is intentional product strategy, not a data gap.
Are content moderation actions the same as ad-account enforcement?
No. The DSA database covers content moderation decisions on user posts and accounts. Ad-account enforcement (campaign rejections, ad rule violations) follows X's separate Ad Policies and is not surfaced here. Use our Policy Tracker to monitor both content rules and ad policies.
How often is this timeline updated?
New entries are added every morning after our ingestion cron pulls yesterday's data from the DSA Transparency API. Expect each day's snapshot to appear roughly 12–18 hours after the calendar day ends.
Can I get alerted when X enforcement spikes in my category?
Yes — our Pro plan includes anomaly alerts that notify you by email when enforcement in a specific category (e.g., Financial Scam for Crypto/Finance brands) spikes significantly above the baseline.
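A spike detector behind alerts like these can, in its simplest form, compare the latest day's count against a trailing baseline. The window size and z-score threshold below are illustrative assumptions, not our production tuning:

```python
from statistics import mean, stdev

def is_spike(daily_counts, z_threshold=3.0):
    """Flag the most recent day if it sits more than z_threshold
    standard deviations above the trailing baseline (minimal
    sketch; window and threshold are illustrative assumptions)."""
    *baseline, latest = daily_counts
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return latest > mu  # flat baseline: any increase is a spike
    return (latest - mu) / sigma > z_threshold

# Hypothetical 'Financial Scam' counts: a quiet week, then a surge
assert is_spike([40, 38, 42, 41, 39, 40, 180]) is True
assert is_spike([40, 38, 42, 41, 39, 40, 43]) is False
```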
Track X's content rules — before they hit your posts or campaigns.
X's content moderation has shifted aggressively in the past 18 months. Get alerted the moment X updates a content rule or ad policy, so you can adjust your editorial or creative approach before enforcement lands.