Every account suspended, every ad cut, every post removed by Meta on Facebook and Instagram in the last 7 days — broken down by the exact rule that triggered the action. The table below is what most advertisers and creators don't see until it's their account.
Three patterns the operators who keep their Meta accounts alive watch for — and how to read this page like one of them.
01
Why most advertisers find out too late
The accounts in the table above were running ads or building audiences yesterday. Today they got suspended, removed, or cut. The reason is almost never that they violated an obvious rule — it's that Meta's interpretation of an existing rule shifted, and they had no way to see it coming. Meta updates Community Standards and Ad Standards on a near-weekly basis. Most updates are quiet edits to a single sentence inside a policy article. Those quiet edits are exactly what produce the enforcement spikes you see in this matrix 3–10 days later.
02
What this page lets you do that almost nobody else does
Every category in the table links to the actual Meta rule that produced the action — directly to the Community Standard or Ad Standard, not to a generic policy hub. So when 'Scams and/or fraud' lights up red on a given day, you can read the exact rule Meta is currently enforcing, compare it to your own ad creative or landing page, and decide whether to pause, edit, or escalate. This is how operators and brand-safety teams actually use this data: as a leading indicator that lets them get ahead of suspensions, not a post-mortem after they happen.
03
How to read the heatmap like a pro
The heat scale is per-row — a dark cell in 'Hateful conduct' is dark relative to that category's own 7-day max, not to absolute volume. This makes trend changes visible even for categories that would otherwise be drowned out by the giant 'Other violation' bucket. Watch for two patterns: (1) a category darkening day-over-day with no policy banner above, which usually signals a silent classifier retraining; (2) a policy banner followed by a darkening category 3–10 days later, which is the strongest possible signal that a paper rule has become a live enforcement priority. Both are reasons to audit your own setup before you become a number in tomorrow's table.
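The per-row scale described above is easy to sketch. This is a minimal illustration with made-up counts, not the page's actual rendering code: each 7-day row is normalized against its own maximum, which is why a jump in a small category stays visible next to the huge 'Other violation' bucket.

```python
# Illustrative daily action counts for two categories (not real data).
counts = {
    "Hateful conduct": [12, 15, 11, 14, 40, 38, 44],
    "Other violation": [9000, 9400, 8800, 9100, 9300, 9200, 9500],
}

def row_heat(row):
    """Scale a 7-day row to [0, 1] against its own maximum, not absolute volume."""
    peak = max(row) or 1  # guard against an all-zero row
    return [round(v / peak, 2) for v in row]

for category, row in counts.items():
    print(category, row_heat(row))
```

Note how 'Hateful conduct' goes from ~0.3 to 1.0 across the week — a clear darkening trend — even though its absolute volume is three orders of magnitude below 'Other violation'.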
The rules they got banned for
Every action above stems from one of these Meta rules.
Not knowing what changed in these rules is what got the accounts in the table suspended, demonetized, or removed. Read the rule, or get alerted the moment Meta updates it — your call.
Why does this Meta enforcement page matter for my business?
Because the accounts in the table didn't realize Meta was about to act on them. They were running ads, posting content, building audiences — and then woke up to a suspension or removal because a policy interpretation shifted. This page shows you, every day, which categories Meta is actively enforcing — so you can audit your own setup against the rules that are actually being applied right now, not the ones that look stale on the policy page.
What's the practical use of seeing daily category breakdowns?
Trend signal. If 'Scams and/or fraud' enforcement spikes 3 days in a row, Meta has almost certainly tightened its automated classifier — and finance, crypto, supplement, and dropshipping advertisers will be the first to feel it. Spotting that pattern early gives you 48–72 hours to review your own creative, copy, and landing pages before the wave reaches your accounts.
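The '3 days in a row' pattern above is simple to check mechanically. A minimal sketch, assuming you have daily action counts for one category (the numbers are invented for illustration):

```python
def rising_streak(daily_counts, days=3):
    """True if each of the last `days` values rose over the one before it."""
    tail = daily_counts[-(days + 1):]
    if len(tail) < days + 1:
        return False  # not enough history to judge a streak
    return all(later > earlier for earlier, later in zip(tail, tail[1:]))

# Illustrative 'Scams and/or fraud' counts for the last 7 days:
scams = [210, 205, 198, 220, 261, 298, 344]
print(rising_streak(scams))  # prints True: the last three deltas are all positive
```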
How do Meta policy changes connect to enforcement spikes?
Meta updates its Community Standards and Ad Standards constantly — most of it never makes mainstream news. We track every diff. Enforcement in a category typically rises 3–10 days after a policy update touches that area. When you see a policy banner above the table followed by a darkening row in the matching category, that's the signature of a real shift — not a paper change.
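At its core, 'tracking every diff' means snapshotting each policy page and comparing versions. A toy sketch using Python's standard `difflib` and invented policy text — the real pipeline (scraping, storage, per-sector routing) is obviously more involved:

```python
import difflib

# Two hypothetical snapshots of the same Ad Standard, one day apart.
day0 = [
    "Ads must not promote deceptive financial products.",
    "Ads must clearly identify the advertiser.",
]
day1 = [
    "Ads must not promote deceptive or unverified financial products.",
    "Ads must clearly identify the advertiser.",
]

# unified_diff surfaces exactly the 'quiet edit to a single sentence'.
diff = list(difflib.unified_diff(day0, day1,
                                 fromfile="ad-standard@day0",
                                 tofile="ad-standard@day1",
                                 lineterm=""))
for line in diff:
    print(line)
```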
Why is Meta's enforcement volume so much higher than other platforms?
Two of the largest social platforms (Facebook and Instagram) with hundreds of millions of EU users, plus the most aggressive automated moderation in the industry. The majority of daily actions are spam removals and 'community guideline violations' — but the categories that matter to advertisers (scams, restricted goods, protection of minors) are where Meta tightens fastest, and where most advertisers get caught off guard.
Is Facebook or Instagram more heavily enforced?
Facebook has higher absolute volume because the user base is larger, but Instagram has a higher rate of account-level actions in categories like cyber violence and protection of minors. We merge both into 'Meta' on this page because for most advertisers and creators, an action on either platform hits the same business unit.
Can I get alerted before this hits my account?
Yes. Free plan: you can read everything on this page. Pro plan: you get email alerts the moment Meta updates a policy in your sector, plus anomaly alerts when enforcement in your relevant categories spikes above baseline. Most Pro users joined after losing an account to a shift they wish they'd seen coming.
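'Spikes above baseline' can be formulated many ways; one simple, common version is a z-score against a trailing window. A sketch with invented counts — not our actual alerting logic, whose windows and thresholds are tuned per category:

```python
from statistics import mean, stdev

def is_spike(trailing, today, z=3.0):
    """Flag today's count if it sits more than z standard deviations
    above the mean of the trailing window."""
    baseline, spread = mean(trailing), stdev(trailing)
    return today > baseline + z * spread

week = [100, 110, 95, 105, 102, 98, 101]  # last 7 days of one category
print(is_spike(week, 180))  # prints True: far above the ~102 baseline
print(is_spike(week, 108))  # prints False: within normal variation
```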
Where does this enforcement data actually come from?
Under the EU Digital Services Act, Meta is required to publish every content moderation decision to a public European database, the DSA Transparency Database, which we ingest daily. The numbers in the table are Meta's own reported actions — we don't estimate, we don't sample. Source attribution and licensing details are at the bottom of this page.
These accounts didn't see it coming. You don't have to be next.
Every removed post, suspended account, and cut ad above is a brand or creator who missed a Meta policy update. Get the diff the moment Meta touches a Community Standard or Ad Standard — so you can adjust before enforcement reaches you.