
How often should you test inbox placement?

Senders ask this constantly, and the honest answer is “more than you do, less than every send.” The right cadence is set by your volume, how critical your audience is, and how often the things that affect placement change. Here is a working framework.

Inbox placement is not a fire-and-forget metric. Provider filters change, your reputation drifts, your audience composition evolves, and a regression on any one of these can eat half your campaign revenue silently. The question is not whether to test — it is at what cadence, and which events should jolt you out of the routine schedule.

TL;DR

Routine cadence: daily if you send >100K/day, weekly at 10K-100K, monthly below that. Always test on changes (IP, DKIM key, From-domain, template). One failing provider is early warning — escalate cadence until you understand why.

Routine cadence by volume

  • >100K/day: daily across all critical providers. At this volume a one-day blip is real revenue lost. The cost of testing is rounding error against the cost of an undetected drop.
  • 10K-100K/day: weekly. Pick a fixed day, ideally before your highest-volume send of the week so any regression is visible in time to delay or fix.
  • 1K-10K/day: bi-weekly to monthly. At this volume content changes more often than reputation shifts, so tying the test to template or campaign changes works better than calendar testing.
  • <1K/day: monthly + on-change. Below this volume the per-provider sample is small enough that weekly testing is mostly noise.
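The volume tiers above can be sketched as a small lookup. This is a minimal illustration assuming the exact thresholds listed; tune them to your own streams:

```python
def routine_cadence(daily_volume: int) -> str:
    """Map average daily send volume to a routine placement-test cadence.

    Thresholds mirror the framework above: >100K daily, 10K-100K weekly,
    1K-10K bi-weekly to monthly, below 1K monthly plus on-change.
    """
    if daily_volume > 100_000:
        return "daily"
    if daily_volume >= 10_000:
        return "weekly"
    if daily_volume >= 1_000:
        return "bi-weekly to monthly"
    return "monthly + on-change"

print(routine_cadence(250_000))  # daily
print(routine_cadence(50_000))   # weekly
print(routine_cadence(500))      # monthly + on-change
```

In practice the useful part is not the lookup itself but wiring it into your scheduler, so the cadence changes automatically as a stream's volume grows.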

Events that should always trigger a test

Regardless of routine cadence, certain changes warrant an out-of-band placement test before the next major send:

  • New sending IP or change of IP pool. Reputation does not transfer; even a known-good ESP IP starts with a different standing at each provider than your prior IP had.
  • DKIM key rotation or new DKIM domain. Verify the new key is published and selected before sending.
  • From-domain change (including subdomain shifts). New From-domain has zero history at every provider on day one.
  • SPF or DMARC policy change. Tightening to p=quarantine or p=reject can surface previously-tolerated alignment gaps as Spam placements.
  • Template overhaul. Major HTML structure change can cross a content-classifier threshold. Test once before broadcasting at full volume.
  • Switching ESP or adding a new sending stream. New transactional flow, new RPA flow, new ESP — all change the IP/auth/template surface enough to need a fresh read.
  • Recovery from a complaint spike or list hygiene event. Re-test after warmup to confirm reputation recovered.
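The trigger list above can be encoded as a simple set check in a send or deploy pipeline, so a placement test is forced before the next major send whenever one of these events has occurred. This is a sketch; the event names are illustrative, not a real API:

```python
# Change events that always warrant an out-of-band placement test,
# mirroring the bullet list above. Names are hypothetical labels your
# own pipeline would emit, not part of any real tool.
OUT_OF_BAND_TRIGGERS = {
    "new_ip", "ip_pool_change",
    "dkim_rotation", "new_dkim_domain",
    "from_domain_change",
    "spf_change", "dmarc_policy_change",
    "template_overhaul",
    "new_esp", "new_sending_stream",
    "post_incident_recovery",
}

def needs_out_of_band_test(events_since_last_test: set) -> bool:
    """True if any change since the last test should force a placement
    test before the next major send, regardless of routine cadence."""
    return bool(events_since_last_test & OUT_OF_BAND_TRIGGERS)

print(needs_out_of_band_test({"dkim_rotation", "copy_tweak"}))  # True
print(needs_out_of_band_test({"copy_tweak"}))                   # False
```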

Treat one-provider failures as canaries

Single-provider regressions are rarely isolated. A drop at Mail.ru tends to precede a drop at Yandex by days; a drop at T-Online often precedes drops at GMX. The filter operators share neither code nor reputation data, but they often react to the same underlying cause (a complaint spike, a content change, an IP-level signal).

  • One-provider Spam: escalate test cadence to daily for the next 1-2 weeks across all providers.
  • Two-provider Spam: stop scaling. Diagnose before sending more volume.
  • Three+ provider Spam: systemic problem. Pause new acquisition sends; focus on engaged-only mail while you diagnose and recover.
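The escalation ladder can be expressed as a function of how many providers currently place you in Spam. A minimal sketch, with thresholds taken directly from the list above:

```python
def escalation(spam_provider_count: int) -> str:
    """Map the number of providers currently landing you in Spam to an
    operational response, per the escalation ladder above."""
    if spam_provider_count == 0:
        return "routine cadence"
    if spam_provider_count == 1:
        return "escalate to daily testing for 1-2 weeks"
    if spam_provider_count == 2:
        return "stop scaling; diagnose before sending more volume"
    return "systemic: pause acquisition sends, go engaged-only, diagnose"

print(escalation(1))  # escalate to daily testing for 1-2 weeks
```

The point of codifying this is to remove the judgment call in the moment: a one-provider failure is easy to rationalize away until it becomes a three-provider failure.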

What to actually look at in each test

  • Folder placement per provider (Inbox / Spam / Promotions / Other / Junk). The headline number.
  • Authentication-Results headers per provider. Catch alignment regressions before they show up in placement.
  • Postmaster reputation where available (Gmail, Mail.ru, Yandex, SNDS for Microsoft IPs).
  • Per-provider trend across at least the last 3-4 tests. A single test is a snapshot; the trend is the actionable signal.
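For the authentication check in particular, the verdicts are machine-readable. Here is a minimal sketch of pulling spf/dkim/dmarc results out of an Authentication-Results header using only Python's standard library; the sample message is fabricated for illustration:

```python
import re
from email import message_from_string

# Fabricated example message with a typical Authentication-Results header.
RAW = """\
From: sender@example.com
Authentication-Results: mx.example.org;
 spf=pass smtp.mailfrom=example.com;
 dkim=pass header.d=example.com;
 dmarc=pass header.from=example.com

body
"""

def auth_verdicts(raw_message: str) -> dict:
    """Extract spf/dkim/dmarc verdicts from the Authentication-Results
    header of a raw RFC 5322 message."""
    msg = message_from_string(raw_message)
    header = msg.get("Authentication-Results", "")
    return dict(re.findall(r"\b(spf|dkim|dmarc)=(\w+)", header))

print(auth_verdicts(RAW))  # {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'pass'}
```

Running this against the headers your seed mailboxes receive catches an alignment regression (e.g. `dkim=fail` after a key rotation) before it shows up as Spam placement.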

Cost vs. benefit

A free placement test takes 60 seconds and produces actionable per-provider placement plus parsed authentication results. Even at high volume, the cost is operational time — far less than the cost of a campaign that lands in Spam at one provider unnoticed.

For senders who want a fixed-cost, no-friction routine: send through our free inbox placement test on the same day each week, save the report URL, and review the trend. Using the same nine seed mailboxes every run gives you a comparable signal across tests.

Is daily testing wasteful at low volume?

Yes. Below ~10K/day the per-provider sample size in your real campaigns is small, and the test signal will move faster than your audience signal. Monthly + on-change is the right cadence.

Do I need separate tests for transactional and marketing streams?

If they use different IPs, DKIM domains, or From addresses, yes. Each independent stream has its own reputation. Testing one tells you nothing about the other.

Is provider variation enough that I need to test all 9?

If your audience touches all of them, yes. A sender who only checks Gmail misses the other providers systematically: different providers fail in different ways, and a single-provider test cannot diagnose a cross-provider pattern.
About the author
Artem Berezin
B2B Deliverability Specialist

B2B deliverability specialist with 5+ years of hands-on outreach experience. Built campaigns reaching 90,000+ inboxes across 20+ countries — and fixed the deliverability problems that came with that scale.

Check your deliverability across 20+ providers

Gmail, Outlook, Yahoo, Mail.ru, Yandex, GMX, ProtonMail and more. Real inbox screenshots, SPF/DKIM/DMARC, spam engine verdicts. Free, no signup.
