Inbox placement is not a fire-and-forget metric. Provider filters change, your reputation drifts, your audience composition evolves, and a regression on any one of these can silently eat half your campaign revenue. The question is not whether to test, but at what cadence, and which events should jolt you out of the routine schedule.
Routine cadence: daily if you send >100K/day, weekly at 10K-100K, monthly below that. Always test on changes (IP, DKIM key, From-domain, template). One failing provider is early warning — escalate cadence until you understand why.
Routine cadence by volume
- >100K/day: daily across all critical providers. At this volume a one-day blip is real revenue lost. The cost of testing is rounding error against the cost of an undetected drop.
- 10K-100K/day: weekly. Pick a fixed day, ideally before your highest-volume send of the week so any regression is visible in time to delay or fix.
- 1K-10K/day: bi-weekly to monthly. At this volume content changes more often than reputation shifts, so tying the test to template or campaign changes works better than calendar testing.
- <1K/day: monthly + on-change. Below this volume the per-provider sample is small enough that weekly testing is mostly noise.
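The volume thresholds above can be expressed as a small lookup. A minimal sketch, assuming you track average daily send volume somewhere; the function name and return labels are illustrative, not part of any tool:

```python
# Sketch: pick a routine placement-test cadence from average daily volume.
# Thresholds mirror the guidelines in the list above.

def routine_cadence(daily_volume: int) -> str:
    """Return a suggested placement-test cadence for a given daily volume."""
    if daily_volume > 100_000:
        return "daily"
    if daily_volume >= 10_000:
        return "weekly"
    if daily_volume >= 1_000:
        # at this volume, also tie tests to template/campaign changes
        return "bi-weekly to monthly"
    return "monthly + on-change"
```

Wiring this into a scheduler is trivial; the point is that the cadence is a deterministic function of volume, not a judgment call made fresh each week.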
Events that should always trigger a test
Regardless of routine cadence, certain changes warrant an out-of-band placement test before the next major send:
- New sending IP or change of IP pool. Reputation does not transfer between IPs; even a known-good ESP IP has no history with your mail stream at a provider that has never seen the two paired.
- DKIM key rotation or new DKIM domain. Verify the new key is published and selected before sending.
- From-domain change (including subdomain shifts). New From-domain has zero history at every provider on day one.
- SPF or DMARC policy change. Tightening to p=quarantine or p=reject can surface previously-tolerated alignment gaps as Spam placements.
- Template overhaul. Major HTML structure change can cross a content-classifier threshold. Test once before broadcasting at full volume.
- Switching ESP or adding a new sending stream. New transactional flow, new RPA flow, new ESP — all change the IP/auth/template surface enough to need a fresh read.
- Recovery from a complaint spike or list hygiene event. Re-test after warmup to confirm reputation recovered.
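For the DKIM rotation case above, "verify the new key is published" can be scripted. A minimal sketch that validates the payload of a DKIM TXT record you have already fetched from selector._domainkey.yourdomain (how you fetch it, via dig, dnspython, or your DNS provider's API, is up to you); the function name is illustrative:

```python
# Sketch: sanity-check a DKIM TXT record string before the first
# post-rotation send. Only validates the record payload, not DNS itself.

def dkim_record_ok(txt_record: str) -> bool:
    """True if the record declares DKIM v1 and carries a non-empty public key."""
    tags = {}
    for part in txt_record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    # v= is optional in published records, but when present it must be DKIM1;
    # p= must exist and be non-empty (an empty p= means the key was revoked).
    if tags.get("v", "DKIM1") != "DKIM1":
        return False
    return bool(tags.get("p"))
```

Run it against the new selector before the first production send, not after.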
Treat one-provider failures as canaries
Single-provider regressions are rarely isolated. A drop at Mail.ru tends to precede a drop at Yandex by days; a drop at T-Online often precedes drops at GMX. The filter operators share neither code nor reputation data, but they often react to the same underlying cause (a complaint spike, a content change, an IP-level signal).
- One-provider Spam: escalate test cadence to daily for the next 1-2 weeks across all providers.
- Two-provider Spam: stop scaling. Diagnose before sending more volume.
- Three+ provider Spam: systemic problem. Pause new acquisition sends; focus on engaged-only mail while you diagnose and recover.
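The escalation ladder above reduces to a function of one number: how many providers are currently landing in Spam. A minimal sketch, with illustrative action labels:

```python
# Sketch: map count of Spam-placing providers to the escalation step
# described in the list above.

def escalation_action(failing_providers: int) -> str:
    if failing_providers >= 3:
        return "pause acquisition sends; engaged-only mail while diagnosing"
    if failing_providers == 2:
        return "stop scaling; diagnose before sending more volume"
    if failing_providers == 1:
        return "escalate to daily tests across all providers for 1-2 weeks"
    return "keep routine cadence"
```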
What to actually look at in each test
- Folder placement per provider (Inbox / Spam / Promotions / Other / Junk). The headline number.
- Authentication-Results headers per provider. Catch alignment regressions before they show up in placement.
- Postmaster reputation where available (Gmail, Mail.ru, Yandex, SNDS for Microsoft IPs).
- Per-provider trend across at least the last 3-4 tests. A single test is a snapshot; the trend is the actionable signal.
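Checking Authentication-Results headers by eye across nine mailboxes gets old fast. A minimal sketch that pulls the per-mechanism verdicts out of a header value so a script can diff them between runs; real headers vary in shape (RFC 8601 allows comments and multiple resolutions), and this handles only the common mech=result form:

```python
import re

# Sketch: extract spf/dkim/dmarc verdicts from an Authentication-Results
# header value, e.g. to flag an alignment regression across test runs.

def auth_results(header_value: str) -> dict:
    """Return e.g. {'spf': 'pass', 'dkim': 'fail', 'dmarc': 'pass'}."""
    verdicts = {}
    for mech, result in re.findall(r"\b(spf|dkim|dmarc)=(\w+)", header_value):
        # keep the first verdict per mechanism; later matches are usually
        # property values, not fresh resolutions
        verdicts.setdefault(mech, result)
    return verdicts
```

Diffing this dict between the current and previous test is exactly the "catch alignment regressions before they show up in placement" step above.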
Cost vs. benefit
A free placement test takes 60 seconds and produces actionable per-provider placement plus parsed authentication results. Even at high volume, the cost is operational time — far less than the cost of a campaign that lands in Spam at one provider unnoticed.
For senders who want a fixed-cost, no-friction routine: send through our free inbox placement test on the same day each week, save the report URL, and review the trend. The same nine seed mailboxes every time gives you a comparable signal across runs.