Every attribution model has the same fundamental limitation: it measures correlation, not causation. A channel that shows up on the path to every conversion might look like a critical contributor. But if you turned it off tomorrow, would fewer deals close? Or would those buyers have found you anyway through a different path?
That's the question incrementality testing answers. It's the only measurement methodology that actually tells you whether your spending is causing conversions, not just correlating with them.
Most growth teams understand this. They also don't run incrementality tests. Here's why that gap exists and what to do about it.
The Core Concept: Holdout Groups
Incrementality testing is structurally simple, even if execution gets complicated. You split your audience into two groups: a treatment group that sees your ads normally, and a holdout group that doesn't see your ads at all. You then compare conversion rates between the two groups over the test period.
The difference in conversion rate between treatment and holdout is your incremental lift. If your treatment group converts at 4.2% and your holdout converts at 3.6%, your campaign is producing roughly 0.6 percentage points of incremental conversion. That's the true value of the campaign - conversions that happened because of the advertising, not despite it or independently of it.
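Here's that calculation as a minimal Python sketch, with illustrative counts chosen to match the rates above. The two-proportion z-test is one standard way to check that the gap isn't noise; it's not the only option.

```python
# Minimal sketch of the lift calculation, with a significance check.
# Counts are illustrative, sized to match the 4.2% vs 3.6% example above.
from statsmodels.stats.proportion import proportions_ztest

treatment_conv, treatment_n = 2_100, 50_000   # 4.2% conversion
holdout_conv, holdout_n = 900, 25_000         # 3.6% conversion

treatment_rate = treatment_conv / treatment_n
holdout_rate = holdout_conv / holdout_n

lift_pp = (treatment_rate - holdout_rate) * 100              # absolute lift, percentage points
relative_lift = (treatment_rate - holdout_rate) / holdout_rate

# Two-proportion z-test: could a gap this size be random noise?
z_stat, p_value = proportions_ztest(
    count=[treatment_conv, holdout_conv],
    nobs=[treatment_n, holdout_n],
)
print(f"Lift: {lift_pp:.2f} pp ({relative_lift:.1%} relative), p = {p_value:.4f}")
```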
The math is that clean. The execution is harder.
Why Teams Don't Run These Tests
The objections are legitimate. Running a proper holdout test means intentionally withholding ads from a segment of your audience for the duration of the test. If the campaign is profitable, you're leaving money on the table during the test window. For a 30-day test on a channel spending $50K per month, you might deliberately sacrifice $15-20K in revenue to get statistically valid data.
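That figure is easy to reconstruct. Here's one back-of-envelope version - the ROAS and holdout share are illustrative assumptions, not fixed benchmarks; swap in your own numbers:

```python
# Back-of-envelope cost of a holdout test window.
# ROAS and holdout share are assumptions (hypothetical), not benchmarks.
monthly_spend = 50_000
platform_roas = 1.5          # revenue the platform attributes per dollar of spend
holdout_share = 0.25         # fraction of the audience withheld from ads
test_days = 30

attributed_revenue = monthly_spend * platform_roas * (test_days / 30)

# Upper bound: assumes the attributed revenue is fully incremental.
# The test exists precisely because it usually isn't.
forgone_revenue = attributed_revenue * holdout_share
print(f"Worst-case revenue at risk: ${forgone_revenue:,.0f}")  # $18,750
```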
Most CMOs won't approve that. Most finance teams won't fund it. So it doesn't happen.
The second problem is statistical rigor. You need large enough audiences in both groups to achieve statistical significance. For lower-volume B2B conversion events - demo bookings, trial sign-ups - you often can't get a clean result in a reasonable time window without either a very large audience or a very long test period.
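A quick power calculation shows why. This sketch assumes a 1% baseline demo-booking rate and asks how many people you'd need per group to reliably detect a 20% relative lift (both rates are hypothetical):

```python
# How big does each group need to be? A power-analysis sketch for a
# low-volume B2B conversion event. Rates are assumptions, not benchmarks.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.010    # holdout: 1.0% demo-booking rate
expected_rate = 0.012    # treatment: the 20% relative lift you hope to detect

effect_size = proportion_effectsize(expected_rate, baseline_rate)  # Cohen's h
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"~{n_per_group:,.0f} people per group")  # roughly 21,000 here
```

Roughly 21,000 people per group, at conversion rates plenty of B2B audiences never reach in a month. That's the volume problem in one number.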
The Cheaper Version: Geo-Based Tests
You don't have to run a full audience-split holdout to get useful incrementality signal. Geo-holdout tests let you turn off a channel in specific geographic regions while running normally in comparable regions, then compare conversion rates between the test and control geos.
This approach costs less in lost revenue (you're only holding back spend in some geos, not across your whole audience), and it sidesteps some of the user-level targeting restrictions that make audience splits hard in privacy-sensitive environments.
The caveat is that geographic comparisons introduce confounders that audience splits don't have. Markets differ in ways that have nothing to do with your advertising. You need to pick control geos that are as similar as possible to your test geos on the variables that matter: industry mix, average deal size, competitive landscape, and seasonal patterns.
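One way to operationalize that matching is a nearest-neighbor comparison across standardized covariates. The metros and numbers below are invented for illustration; more rigorous options exist (pre-period time-series matching, synthetic control), but even a rough version like this rules out obviously bad control geos:

```python
# Picking control geos: nearest-neighbor matching on the variables above.
# Metro names and all figures are invented for illustration.
import pandas as pd

geos = pd.DataFrame({
    "geo":            ["Austin", "Denver", "Raleigh", "Columbus"],
    "avg_deal_size":  [42_000, 45_000, 38_000, 61_000],
    "weekly_convs":   [31, 29, 33, 12],       # pre-period conversion volume
    "competitor_idx": [0.7, 0.6, 0.8, 0.3],   # share-of-voice proxy
}).set_index("geo")

test_geo = "Austin"
features = (geos - geos.mean()) / geos.std()   # z-score so units are comparable

# Euclidean distance from the test geo across standardized features
distance = ((features - features.loc[test_geo]) ** 2).sum(axis=1) ** 0.5
print(distance.drop(test_geo).sort_values())   # Denver and Raleigh match; Columbus doesn't
```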
What You Actually Learn
Well-run incrementality tests have produced some uncomfortable discoveries for growth teams.
Branded search campaigns - the ones that show 8-12x ROAS in platform reporting - frequently show near-zero incremental lift. The people clicking your branded search ads were going to find you anyway. You're paying to capture demand that already existed, not creating new demand. For companies spending $10-20K per month on branded keywords, that's often money that could be redeployed to channels with much higher true ROI.
Retargeting shows a similar pattern in many B2B markets. The accounts you're retargeting were already in your pipeline. The retargeting ads capture credit for conversions that the sales process was going to close regardless. Incremental lift from retargeting is often 30-50% lower than platform attribution numbers suggest.
Every attribution model assumes your ads are doing something. Incrementality testing actually checks that assumption. The results are frequently surprising and occasionally brutal.
How to Run a Practical Version on a Limited Budget
You don't need a data science team to run a useful incrementality test. You need a clear hypothesis, a clean holdout, and enough volume to be meaningful.
Start with branded search. It's the easiest test to run because you can set up a campaign-level holdout, the conversion volume is usually high enough to generate signal quickly, and the financial risk is low (you can always restore the campaigns). Pause your branded search campaigns for 14-21 days in one region or for a test cohort and watch whether organic brand conversions pick up the slack.
If they do - if people who searched your brand name and would have clicked your paid ad instead clicked an organic listing - your branded paid search ROAS was inflated. You can reallocate that budget somewhere with genuine incremental impact.
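The read-out can be this simple: compare total branded conversions - paid plus organic - before and during the pause. A sketch with invented daily figures:

```python
# Did organic pick up the slack? Compare total branded conversions
# before and during the pause. Daily figures are invented for illustration.
pre = {"paid": 38, "organic": 22}     # avg daily brand conversions, 3 weeks pre-pause
during = {"paid": 0, "organic": 54}   # avg daily during the 14-21 day pause

organic_pickup = during["organic"] - pre["organic"]   # +32/day
recovered = organic_pickup / pre["paid"]              # share of paid volume organic absorbed

total_pre = pre["paid"] + pre["organic"]
total_during = during["paid"] + during["organic"]
net_loss = (total_pre - total_during) / total_pre     # conversions that actually disappeared

print(f"Organic absorbed {recovered:.0%} of the paused paid volume")
print(f"Total branded conversions fell {net_loss:.0%} - the incremental share")
```

A real read should control for trend and seasonality; the easiest way is to run the same comparison in a region where you didn't pause. But the logic is exactly this: if totals barely move while paid goes to zero, the paid conversions weren't incremental.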
Incrementality as a Validation Layer
The teams operating at the highest level use incrementality tests to validate the conclusions their attribution models produce. If your attribution model says LinkedIn is driving 22% of pipeline and an incrementality test shows LinkedIn has strong lift in geos where you run it versus geos where you don't, those two data points reinforce each other.
That combination is much more credible in a budget review than either data point alone. Attribution tells the story. Incrementality tests verify it. The marketing teams who can walk into a CFO conversation with both have a fundamentally different relationship with their finance partner than the teams who can only show last-click ROAS numbers.
It takes more upfront work. The teams doing it will tell you it's worth it every time.
Build a measurement framework that validates itself
Attribify connects attribution data with holdout test results so you can see what's actually driving conversions.
Start Free Trial