
Meta Enforcement Timeline

Every account suspended, every ad cut, every post removed by Meta on Facebook and Instagram in the last 7 days — broken down by the exact rule that triggered the action. The table below is what most advertisers and creators don't see until it's their account.

Updated daily
18,418,020 actions in last 7 days
2 platforms (FB + IG)

18,418,020 Meta actions in total

| Category | 7-day total | Sat Apr 25 | Fri Apr 24 | Thu Apr 23 | Wed Apr 22 | Tue Apr 21 | Mon Apr 20 | Sun Apr 19 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Community guideline violations | 16,523,695 | 2,501,990 | 1,773,509 | 2,396,065 | 3,416,965 | 2,831,862 | 2,172,215 | 1,431,089 |
| Scams and/or fraud | 1,003,033 | 130,740 | 128,362 | 160,851 | 188,827 | 190,877 | 102,303 | 101,073 |
| Negative effects on civic discourse or elections | 195,759 | 33,085 | 32,100 | 48,603 | 50,273 | 19,874 | 5,484 | 6,340 |
| Data protection and privacy violations | 170,261 | 13,474 | 30,290 | 30,411 | 30,378 | 21,754 | 23,585 | 20,369 |
| Unsafe, non-compliant or prohibited products | 133,025 | 18,205 | 22,625 | 27,386 | 20,202 | 17,931 | 18,068 | 8,608 |
| Intellectual property infringements | 110,492 | 10,202 | 14,527 | 17,222 | 21,848 | 18,267 | 16,711 | 11,715 |
| Violence | 70,626 | 9,076 | 9,247 | 12,847 | 16,472 | 9,587 | 6,057 | 7,340 |
| Protection of minors | 68,832 | 11,230 | 6,262 | 12,869 | 12,258 | 11,113 | 8,212 | 6,888 |
| Risk for public security | 50,605 | 4,771 | 7,792 | 8,145 | 12,824 | 9,323 | 4,265 | 3,485 |
| Illegal or harmful speech | 39,572 | 5,212 | 5,368 | 6,110 | 9,244 | 5,841 | 3,609 | 4,188 |
| Cyber violence | 27,414 | 2,511 | 2,979 | 3,794 | 7,157 | 5,301 | 2,521 | 3,151 |
| Consumer information infringements | 15,382 | 2,224 | 2,725 | 2,236 | 2,724 | 2,025 | 1,844 | 1,604 |
| Self-harm | 9,324 | 1,078 | 1,249 | 1,417 | 2,310 | 1,584 | 869 | 817 |
| Animal welfare | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Cyber violence against women | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Unspecified notices | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Daily total | 18,418,020 | 2,743,798 | 2,037,035 | 2,727,956 | 3,791,482 | 3,145,339 | 2,365,743 | 1,606,667 |
Heat scale: low / mid / high · per row, relative to that category's max · source: EU DSA Transparency Database (CC BY 4.0)
Read this

Most Meta accounts find out too late.

Three patterns the operators who keep their Meta accounts watch for — and how to read this page like one of them.

01

Why most advertisers find out too late

The accounts in the table above were running ads or building audiences yesterday. Today they got suspended, removed, or cut. The reason is almost never that they violated an obvious rule — it's that Meta's interpretation of an existing rule shifted, and they had no way to see it coming. Meta updates Community Standards and Ad Standards on a near-weekly basis. Most updates are quiet edits to a single sentence inside a policy article. Those quiet edits are exactly what produce the enforcement spikes you see in this matrix 3–10 days later.

02

What this page lets you do that almost nobody else does

Every category in the table links to the actual Meta rule that produced the action — directly to the Community Standard or Ad Standard, not to a generic policy hub. So when 'Scams and/or fraud' lights up red on a given day, you can read the exact rule Meta is currently enforcing, compare it to your own ad creative or landing page, and decide whether to pause, edit, or escalate. This is how operators and brand-safety teams actually use this data: as a leading indicator that lets them get ahead of suspensions, not a post-mortem after they happen.

03

How to read the heatmap like a pro

The heat scale is per-row: a dark cell in 'Self-harm' is dark relative to that category's own 7-day max, not to absolute volume. This makes trend changes visible even for small categories that would otherwise be drowned out by the giant 'Community guideline violations' bucket. Watch for two patterns: (1) a category darkening day-over-day with no policy banner above it, which usually signals a silent classifier retraining; (2) a policy banner followed by a darkening category 3–10 days later, which is the strongest possible signal that a paper rule has become a live enforcement priority. Both are reasons to audit your own setup before you become a number in tomorrow's table.
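The per-row scale can be sketched in a few lines. This is a hypothetical reconstruction, not the page's actual rendering code: the three equal-width low/mid/high buckets are an assumption, and the sample row is the 'Self-harm' counts from the table (Sat Apr 25 through Sun Apr 19).

```python
def row_heat(counts: list[int]) -> list[str]:
    """Map one category's daily counts to low/mid/high, relative to that row's own max."""
    row_max = max(counts) or 1  # all-zero rows (e.g. 'Animal welfare') stay 'low'
    buckets = []
    for c in counts:
        ratio = c / row_max
        if ratio < 1 / 3:          # assumed threshold, not the page's real one
            buckets.append("low")
        elif ratio < 2 / 3:        # assumed threshold
            buckets.append("mid")
        else:
            buckets.append("high")
    return buckets

# 'Self-harm' row from the table, Sat Apr 25 -> Sun Apr 19
self_harm = [1_078, 1_249, 1_417, 2_310, 1_584, 869, 817]
print(row_heat(self_harm))
# -> ['mid', 'mid', 'mid', 'high', 'high', 'mid', 'mid']
```

Note the design point from the text: because normalization is per-row, the 817-action Sunday cell in 'Self-harm' and a 1.4M-action cell in 'Community guideline violations' can carry the same shade.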

The rules they got banned for

Every action above stems from one of these Meta rules.

Not knowing what changed in these rules is what got the accounts in the table suspended, demonetized, or removed. Read the rule, or get alerted the moment Meta updates it — your call.

Community guideline violations: 90% · 16,523,695
Negative effects on civic discourse or elections: 1% · 195,759
Data protection and privacy violations: 1% · 170,261
Unsafe, non-compliant or prohibited products: 1% · 133,025
Intellectual property infringements: 1% · 110,492

Frequently asked questions

Why does this Meta enforcement page matter for my business?
Because the accounts in the table didn't realize Meta was about to act on them. They were running ads, posting content, building audiences — and then woke up to a suspension or removal because a policy interpretation shifted. This page shows you, every day, which categories Meta is actively enforcing — so you can audit your own setup against the rules that are actually being applied right now, not the ones that look stale on the policy page.
What's the practical use of seeing daily category breakdowns?
Trend signal. If 'Scams and/or fraud' enforcement spikes 3 days in a row, Meta has almost certainly tightened its automated classifier — and finance, crypto, supplement, and dropshipping advertisers will be the first to feel it. Spotting that pattern early gives you 48–72 hours to review your own creative, copy, and landing pages before the wave reaches your accounts.
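A minimal version of that spike check can be written down, assuming a simple rule: three consecutive rising days, plus the latest day clearing a multiple of the trailing baseline. The 1.5x factor and 4-day baseline window are illustrative assumptions, not the alerting logic behind this page.

```python
from statistics import mean

def is_spiking(daily: list[int], baseline_days: int = 4, factor: float = 1.5) -> bool:
    """daily is ordered oldest -> newest; flags 3 rising days well above baseline."""
    if len(daily) < baseline_days + 3:
        return False
    last3 = daily[-3:]
    rising = last3[0] < last3[1] < last3[2]
    # baseline = mean of the days immediately before the 3-day run
    baseline = mean(daily[-(baseline_days + 3):-3])
    return rising and last3[-1] > factor * baseline

# 'Scams and/or fraud' row, reordered oldest (Sun Apr 19) -> newest (Sat Apr 25)
scams = [101_073, 102_303, 190_877, 188_827, 160_851, 128_362, 130_740]
print(is_spiking(scams))  # -> False: the last three days are falling, not rising
```

On this week's data the scams row fails the rising test, which matches the table: the Tue/Wed peak had already receded by Saturday.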
How do Meta policy changes connect to enforcement spikes?
Meta updates its Community Standards and Ad Standards constantly — most of it never makes mainstream news. We track every diff. Enforcement in a category typically rises 3–10 days after a policy update touches that area. When you see a policy banner above the table followed by a darkening row in the matching category, that's the signature of a real shift — not a paper change.
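Diff tracking of this kind can be sketched with the standard library: compare yesterday's snapshot of a policy page against today's and surface only the changed lines. The policy sentences below are invented examples; real tracking would fetch and store dated snapshots of each Community Standard.

```python
import difflib

def policy_diff(old: str, new: str) -> list[str]:
    """Return only the added/removed lines between two policy text snapshots."""
    diff = difflib.unified_diff(old.splitlines(), new.splitlines(), lineterm="")
    # keep +/- content lines, drop the '---'/'+++' file headers
    return [line for line in diff
            if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]

# Invented example of the 'quiet single-sentence edit' described above
old = "Ads must not promise guaranteed returns.\nAds must disclose the advertiser."
new = "Ads must not promise guaranteed or typical returns.\nAds must disclose the advertiser."
for line in policy_diff(old, new):
    print(line)
# -Ads must not promise guaranteed returns.
# +Ads must not promise guaranteed or typical returns.
```

A one-word change like this is exactly the kind of edit that never makes the news but can widen what a classifier treats as a violation.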
Why is Meta's enforcement volume so much higher than other platforms?
Two of the largest social platforms (Facebook + Instagram) with hundreds of millions of EU users, plus the most aggressive automated moderation in the industry. The majority of daily actions are spam removals and 'community guideline violations' — but the categories that matter to advertisers (scams, restricted goods, protection of minors) are where Meta tightens fastest, and where most advertisers get caught off guard.
Is Facebook or Instagram more heavily enforced?
Facebook has higher absolute volume because the user base is larger, but Instagram has a higher rate of account-level actions in categories like cyber violence and protection of minors. We merge both into 'Meta' on this page because for most advertisers and creators, an action on either platform hits the same business unit.
Can I get alerted before this hits my account?
Yes. Free plan: you can read everything on this page. Pro plan: you get email alerts the moment Meta updates a policy in your sector, plus anomaly alerts when enforcement in your relevant categories spikes above baseline. Most Pro users joined after losing an account they wish they'd seen coming.
Where does this enforcement data actually come from?
Meta is required by law to publish every content moderation decision to a public European database, which we ingest daily. The numbers in the table are Meta's own reported actions — we don't estimate, we don't sample. Source attribution and licensing details are at the bottom of this page.

These accounts didn't see it coming. You don't have to be next.

Every removed post, suspended account, and cut ad above is a brand or creator who missed a Meta policy update. Get the diff the moment Meta touches a Community Standard or Ad Standard — so you adjust before enforcement reaches you.

Create free account