
X Enforcement Timeline

Every moderation action X reports under the EU Digital Services Act — grouped by violation type and linked to the specific X Rule that governs each enforcement bucket. X's enforcement profile differs sharply from other platforms; this timeline shows how.

Updated daily
95,134 actions in last 7 days
1 platform

95,134 X actions in total

Columns, left to right: Sat Apr 25 · Fri Apr 24 · Thu Apr 23 · Wed Apr 22 · Tue Apr 21 · Mon Apr 20 · Sun Apr 19

Scams and/or fraud (82,271 total): 9,586 · 10,094 · 10,826 · 13,541 · 12,988 · 13,452 · 11,784
Protection of minors (6,102 total): 902 · 1,048 · 972 · 1,021 · 883 · 760 · 516
Unsafe, non-compliant or prohibited products (3,552 total): 1,698 · 512 · 252 · 322 · 328 · 246 · 194
Self-harm (960 total): 148 · 106 · 143 · 130 · 119 · 215 · 99
Intellectual property infringements (788 total): 66 · 79 · 87 · 121 · 162 · 114 · 159
Cyber violence (643 total): 101 · 33 · 89 · 100 · 132 · 120 · 68
Violence (541 total): 118 · 77 · 76 · 57 · 71 · 69 · 73
Community guideline violations (250 total): 27 · 39 · 59 · 48 · 24 · 34 · 19
Data protection and privacy violations (16 total): 3 · 3 · 2 · 2 · 3 · 3
Illegal or harmful speech (7 total): 1 · 3 · 1 · 1 · 1
Animal welfare (4 total): 2 · 2
Consumer information infringements (0 total)
Cyber violence against women (0 total)
Negative effects on civic discourse or elections (0 total)
Risk for public security (0 total)
Unspecified notices (0 total)

Daily total: 12,650 · 11,991 · 12,509 · 15,343 · 14,709 · 15,016 · 12,916
Heat scale: low / mid / high, per row, relative to that category's max. Source: EU DSA Transparency Database (CC BY 4.0)
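The per-row scaling described above can be sketched in a few lines. The low/mid/high cut-offs used here (33% and 66% of the row's own max) are assumptions for illustration, not the page's actual thresholds.

```python
def heat_levels(row_counts):
    """Normalize one category's daily counts to that row's own max,
    mapping each day to a low/mid/high bucket.
    The 0.33/0.66 thresholds are illustrative assumptions."""
    peak = max(row_counts) or 1  # avoid division by zero for all-zero rows
    levels = []
    for count in row_counts:
        share = count / peak
        if share >= 0.66:
            levels.append("high")
        elif share >= 0.33:
            levels.append("mid")
        else:
            levels.append("low")
    return levels

# Self-harm row from the matrix above (Sat..Sun)
print(heat_levels([148, 106, 143, 130, 119, 215, 99]))
# → ['high', 'mid', 'high', 'mid', 'mid', 'high', 'mid']
```

Because each row is scaled to its own peak, a "high" cell in the Animal welfare row can represent two actions while a "mid" cell in the Scams row represents ten thousand.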
Read this

Most X account operators find out about rule changes too late.

Three patterns the operators who keep their X accounts watch for — and how to read this page like one of them.

01

How X's enforcement profile differs from other platforms

X's content moderation strategy has shifted toward visibility-based enforcement: rather than removing borderline content outright, the platform demotes, labels, or restricts interaction with it. As a result, removal counts in X's DSA report tend to be lower than Meta's or TikTok's, while visibility actions run higher. The categories that dominate the matrix above (scams and fraud, protection of minors, prohibited products) track X's published Rules. Each row can be cross-referenced with the specific Rule via the sidebar, so you can see exactly which X Rule a given enforcement bucket corresponds to.

02

What advertisers and account holders should know

X enforces ads under a separate ad-policy regime that is not surfaced in DSA reports, but content moderation outcomes still matter for advertisers because organic content from brand handles falls under the same Rules as any user post. For accounts running campaigns in regulated verticals (crypto, finance, supplements), the 'Scams and/or fraud' category is the leading indicator: when X tightens its scam-detection model, brand-handle posts are often the first false positives. The 'Hateful conduct' and 'Violent speech' categories are also worth tracking for editorial-driven brands or media companies whose posts may quote controversial material verbatim.

03

Reading the matrix in the context of X's policy churn

X has published a relatively high volume of Rule changes since 2023 — more than Meta, less than TikTok. Our scanner captures these changes and surfaces them as policy banners above the matrix. Because X's enforcement strategy is more visibility-based, the lag between a policy update and observable enforcement can be shorter than on platforms that primarily remove content; demotion actions are easier to scale than removal review queues. The heat scale is per-row, normalized to that category's 7-day max, so visibility-heavy categories don't drown out the lower-volume buckets.

The rules they got banned for

Every action above stems from one of these X rules.

Not knowing what changed in these rules is what got the accounts in the table suspended, demonetized, or removed. Read the rule, or get alerted the moment X updates it — your call.

Protection of minors
6% · 6,102
Unsafe, non-compliant or prohibited products
4% · 3,552
  • X Rules
    Platform-wide policy reference.
Self-harm
1% · 960
  • X Rules
    Platform-wide policy reference.
Intellectual property infringements
1% · 788
Cyber violence
1% · 643
Community guideline violations
0% · 250
Data protection and privacy violations
0% · 16
  • X Rules
    Platform-wide policy reference.

Frequently asked questions

Where does this X enforcement data come from?
Every action is sourced from the European Commission's DSA Transparency Database. X submits each moderation decision — post removals, account suspensions, sensitive media labels — under Article 24(5) of the Digital Services Act. We aggregate their daily submissions under CC BY 4.0.
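As a sketch of what that daily aggregation involves, one day's submissions can be tallied per enforcement category from a dump file. The `category` column name and the category strings below are illustrative placeholders, not the database's exact schema.

```python
import csv
import io
from collections import Counter

def count_by_category(csv_text):
    """Tally one day's moderation decisions per enforcement category.
    The 'category' column name is an assumption about the dump schema."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        counts[row["category"]] += 1
    return counts

# Toy three-row dump; labels are illustrative, not the database's enums
sample = (
    "uuid,category\n"
    "a1,Scams and/or fraud\n"
    "a2,Scams and/or fraud\n"
    "a3,Protection of minors\n"
)
print(count_by_category(sample))
```

Summing these per-category counters over seven consecutive dumps yields exactly the row totals shown in the matrix above.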
How do enforcement categories map to X's actual Rules?
DSA defines 16 enforcement categories. We surface the most relevant X Rule (Hateful Conduct, Violent Speech, Financial Scam, Manipulation and Spam, Sensitive Media, IP Policy, etc.) for each category in the sidebar. So if you see a spike in 'Illegal or harmful speech' enforcement, the link takes you straight to X's Hateful Conduct Policy.
Why does X's enforcement profile look different from other platforms?
X has publicly shifted its content moderation approach toward less proactive removal and more visibility-based enforcement (labels, demotion, restricted reach). You'll often see lower removal counts than other platforms but higher 'visibility' actions. This is intentional product strategy, not a data gap.
Are content moderation actions the same as ad-account enforcement?
No. The DSA database covers content moderation decisions on user posts and accounts. Ad-account enforcement (campaign rejections, ad rule violations) follows X's separate Ad Policies and is not surfaced here. Use our Policy Tracker to monitor both content rules and ad policies.
How often is this timeline updated?
New entries are added every morning after our ingestion cron pulls yesterday's data from the DSA Transparency API. Expect each day's snapshot to appear roughly 12–18 hours after the calendar day ends.
Can I get alerted when X enforcement spikes in my category?
Yes — our Pro plan includes anomaly alerts that notify you by email when enforcement in a specific category (e.g., Financial Scam for Crypto/Finance brands) spikes significantly above the baseline.
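One common way to implement such an alert is a z-score check of today's count against a trailing baseline. The window size and threshold below are illustrative assumptions, not the product's actual tuning.

```python
from statistics import mean, stdev

def is_spike(history, today, z_threshold=3.0):
    """Flag today's count if it sits more than z_threshold standard
    deviations above the trailing baseline.
    Window and threshold are illustrative, not production tuning."""
    if len(history) < 2:
        return False  # not enough data to estimate a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu  # flat history: any increase is a spike
    return (today - mu) / sigma > z_threshold

# Trailing week of scams-and-fraud counts from the matrix above
baseline = [9586, 10094, 10826, 13541, 12988, 13452, 11784]
print(is_spike(baseline, 40000))  # → True, far above the week's range
print(is_spike(baseline, 12000))  # → False, within normal variation
```

A z-score keeps the alert relative to each category's own volatility, so a jump of 200 actions can trigger in a quiet category while routine swings of thousands in the scams row do not.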

Track X's content rules — before they hit your posts or campaigns.

X's content moderation has shifted aggressively in the past 18 months. Get alerted the moment X updates a content rule or ad policy, so you can adjust your editorial or creative approach before enforcement lands.

Create free account