Incrementality testing has become the single most important measurement discipline for retail and QSR marketers. CFOs are done accepting vanity metrics. With tighter budgets, estimating the value of marketing technology requires measuring not only operational savings but the incremental revenue it generates.
That million-dollar campaign that drove a surge in transactions? Some of those customers might have walked through the door without it.
According to eMarketer, 71% of advertisers now rank incrementality as the most important KPI amid the boom in retail and commerce media. The industry is waking up: attribution is great, but it only tells you who converted. Incrementality tells you why, and whether your marketing had anything to do with it.
This article covers:
- How incrementality testing works
- The strategic value it unlocks
- Where real merchants are using it today (with results you can benchmark against)
Core Framework: How Incrementality Testing Actually Works
At its core, incrementality testing answers one question:
Did this marketing activity cause a change in customer behavior, or would that behavior have happened regardless?
Figuring that out involves splitting your target audience into two groups:
- The test group, which sees your campaign (the ad, the offer, the push notification, whatever it is).
- The control group, which sees nothing.
Then you compare outcomes between the two. It sounds simple, conceptually. But the distinction between this approach and what most marketers currently rely on is massive.
Attribution vs. Incrementality
You might be thinking you’re already doing a good job figuring out which campaigns produced which results with your attribution strategy. While that’s important for giving your team credit and for future planning, it doesn’t actually tell you whether your campaign truly worked.
As a member of Bain's Advisor Network puts it:
“There’s a fundamental difference between marketing attribution and incrementality. To me, being able to measure something is attribution. Understanding how it changes consumer behavior is incrementality. Often those terms are used interchangeably, but they’re really, really different.”
Last-click attribution overvalues the final touchpoint. Multi-touch attribution estimates incremental impact through models rather than direct measurement. Marketing mix models use statistical correlations but can't track individual behaviors.
These are all useful inputs, of course. But none can establish the causal relationship between your campaign and a purchase. That’s what incrementality testing does.
Where A/B Testing Fits In
A/B testing is another method marketers sometimes confuse with incrementality: both use randomized groups, and both compare outcomes. But they’re asking fundamentally different questions.
An A/B test compares two versions of something within an audience that’s already being exposed to your marketing. Everyone in the test sees something; you’re just figuring out which version performs better: subject line A vs. subject line B.
It tells you how to do your marketing more effectively, but it can’t tell you whether the marketing itself is driving new behavior.
Incrementality testing, by contrast, asks “does any of this work at all?” That’s a much harder question, and the answer is far more valuable for your budget.

Not Every Incrementality Test Looks the Same
The methodology you choose depends on your channels, your scale, and what you’re trying to learn.
- Audience-based holdout tests are the gold standard. You randomly divide your audience into test and control segments (typically 80/20 or 80/10/10) and expose only the test group to your campaign. This works especially well for digital channels and card-linked offer platforms where you can precisely control exposure.
- Geo-based tests compare performance across matched geographic regions. Ideal for omnichannel retailers who need to measure combined digital and in-store impact.
- Time-based tests use on/off windows (running a campaign, pausing it, and comparing performance). This is the simplest approach to execute but is also the noisiest, since external factors can muddy results.
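For audience-based holdouts, the randomized split itself is simple to sketch. Here’s a minimal Python example (the function name and the 80/20 default are illustrative assumptions, not a platform API); the randomization is what makes the later comparison causal:

```python
import random

def split_audience(customer_ids, holdout_share=0.2, seed=42):
    """Randomly split an audience into test and control (holdout) groups.

    Hypothetical helper: an 80/20 split mirrors the common design
    described above. A fixed seed keeps the split reproducible.
    """
    rng = random.Random(seed)
    ids = list(customer_ids)
    rng.shuffle(ids)                                 # randomize before cutting
    cutoff = int(len(ids) * (1 - holdout_share))
    return ids[:cutoff], ids[cutoff:]                # (test group, control group)

test, control = split_audience(range(10_000))
print(len(test), len(control))  # 8000 2000
```

Only the test group gets the campaign; the control group’s behavior becomes your baseline.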
What This Looks Like in Practice
Consider a QSR brand that wants to test whether a cash back promotion actually drives new visits or just rewards people who were already going to stop by.
The brand runs a promotion offering 10% cash back through a reward demand platform, with a push notification to eligible users.
But instead of blasting the offer to everyone, the marketing team randomly withholds it from 20% of the audience as a control group.
After two weeks, the test group shows a 12% higher visit rate and an 8% higher average order value. The control group’s behavior stays flat. That gap between the two groups (in visit rate and AOV) is the incremental lift. Those visits happened because of the promo.
The Strategic Value of Incrementality Testing: What It Gets You
Understanding causation changes how you spend money, how you report results, and how seriously the C-suite takes your marketing function.
McKinsey’s 2025 research on personalized marketing states it directly: “To validate the ROI of personalization efforts, rigorous incrementality testing, standardized performance metrics, and measurement playbooks are essential.”
In other words, your targeting sophistication doesn’t really mean anything if you can’t prove it works.
Here’s a quick look at just how much value incrementality testing can have:

Brands running structured incrementality programs report iROAS figures from 7.5:1 to 11:1 in cash back offer campaigns. One health and wellness retailer Kard worked with generated over 10x iROAS targeting existing active customers, proving even loyal shoppers spend more with the right offer. Another achieved roughly $5 incremental CPA while generating significant new and reactivated customer revenue.
Incrementality testing helps you show that for every dollar you spent on a campaign, you generated $X that would not have existed otherwise. Hard for finance teams to argue with that.
Note: Incrementality Testing Does Not Replace Attribution or Marketing Mix Modeling
It completes them. Attribution shows what's happening across touchpoints, MMM gives you a macro view of channel efficiency, and incrementality testing tells you what’s real.
The smartest merchants combine all three:
- Planning with MMM
- Optimizing with attribution
- Validating with incrementality
As cookies disappear and walled gardens expand, first-party transaction data and randomized experiments become even more valuable — they don’t depend on cookies or device IDs, they depend on math.
Where You’d Actually Use Incrementality Testing
Commerce media, advertising that closes the loop between impressions and transactions, is reshaping digital advertising. McKinsey estimates the category could generate $1.3 trillion in enterprise value, with more than 15 retail media networks launching in the U.S. in the past two years alone. The category is expected to deliver over $100 billion in revenue to American companies by 2026.
What makes commerce media different is the closed-loop feedback.
Marketers connect ad spend directly to verified purchases. But that transparency also raises the bar. Brand partners start asking: How much of this would have happened anyway? That question is why incrementality testing has become table stakes. Brands now require incrementality proof before committing significant budgets to any campaign, especially in commerce media.
Retail Merchants
A global electronics brand ran a Q4 holiday campaign through a card-linked offers platform, offering new customers 7% cash back on online purchases.
The campaign generated over 19 million impressions, with 70% of the audience in the Gen Z and Millennial demographic.
Compared to a control group, average order value jumped 166% and average spend rose 43%, at a 7.5:1 incremental ROAS during one of the year’s most competitive shopping windows. Because the control group was built in, each of those metrics reflects true incremental impact.
QSR Brands
Cicis Pizza partnered with Kard, a commerce media platform, for a year-long always-on cash back campaign, serving offers inside banking apps popular with younger consumers. The results:
- $2.7 million in attributed sales
- 2,500 weekly redemptions, with 72% of those redemptions from first-time diners
The remaining 28% came from existing and lapsed customers. Cicis achieved an 11:1 topline ROAS from the campaign.
6 Best Practices for Incrementality Testing
Running a test that produces trustworthy, actionable results takes discipline. Here’s how to go about it:
1. Start with Clear Objectives
Define what you’re measuring before you design a test. Sales lift? New customer acquisition? Visit frequency?
The KPI determines your methodology, sample size, and duration.
2. Choose the Right Methodology
The IAB Europe and IAB U.S. jointly released 2025 guidelines identifying four primary approaches: experiment-based, model-based counterfactual, econometric, and hybrid proxy.
Audience-based holdouts tend to be the most precise for digital and cash back channels. Geo-based tests suit omnichannel campaigns, particularly those that end in store. Start with what fits your current capabilities and build from there.
3. Design for Statistical Power
An underpowered test gives you false confidence. Use sufficient sample sizes and run tests long enough to capture full purchase cycles (two to four weeks minimum for retail and QSR).
Maintain a control group of at least 10 to 20% of your audience. Target 95% statistical confidence before making budget decisions.
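To gut-check whether a planned test is adequately powered, the standard two-proportion z-test approximation can estimate the required per-group sample size. This is an illustrative sketch (the function name and defaults are ours, and it assumes equal-sized groups), not a substitute for your platform’s test-design tooling:

```python
import math
from statistics import NormalDist

def required_sample_size(p_control, relative_lift, alpha=0.05, power=0.8):
    """Per-group sample size needed to detect a relative lift in conversion
    rate, via the two-proportion z-test approximation.
    Defaults match the article's guidance: 95% confidence, 80% power."""
    p_test = p_control * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_control + p_test) / 2                # pooled rate under the null
    term = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
            + z_beta * math.sqrt(p_test * (1 - p_test)
                                 + p_control * (1 - p_control)))
    return math.ceil(term ** 2 / (p_test - p_control) ** 2)

# To detect a 10% relative lift on a 5% baseline visit rate:
print(required_sample_size(0.05, 0.10))  # roughly 31,000 people per group
```

Note how quickly the requirement grows as the expected lift shrinks; this is why underpowered tests on small audiences produce false confidence.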
4. Know the Difference Between Lift and Profit
Incremental lift tells you how much additional behavior your campaign drove. Incremental profit tells you whether that lift exceeds the cost of generating it. A campaign can show impressive lift and still lose money, so ideally, you calculate both to make sure you’re still on the right track.
Incremental Lift
How to calculate it: ((Test Group Conversion Rate − Control Group Conversion Rate) / Control Group Conversion Rate) x 100
If your test group has a 15% visit rate and your control group has a 10% visit rate: (15% - 10%) / 10% = 50% incremental lift. Keep in mind, lift alone doesn’t tell you whether the campaign was worth running.
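The lift formula above takes a few lines of Python (a minimal sketch; the function name is ours, using whole-number percentages so the arithmetic matches the example exactly):

```python
def incremental_lift(test_rate, control_rate):
    """Relative lift of the test group over the control group, as a percent."""
    return (test_rate - control_rate) / control_rate * 100

# The worked example: 15% test visit rate vs. 10% control visit rate.
print(incremental_lift(15, 10))  # 50.0
```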
Incremental ROAS (iROAS)
How to calculate it: (Incremental Revenue / Campaign Cost)
Where incremental revenue = revenue from the test group minus revenue from the control group, scaled to account for differences in group size.
So if your test group generated $500K, your scaled control group would have generated $350K, and the campaign cost $20K: ($500K − $350K) / $20K = 7.5:1 iROAS.
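That scaling step matters, so here is a sketch of both pieces in Python (function names are ours; the 80/20 group sizes echo the holdout design discussed earlier):

```python
def scale_control(control_revenue, test_size, control_size):
    """Scale raw control-group revenue up to the test group's size,
    so the two revenue figures are compared on a per-member basis."""
    return control_revenue * (test_size / control_size)

def iroas(test_revenue, scaled_control_revenue, campaign_cost):
    """Incremental revenue divided by campaign cost."""
    return (test_revenue - scaled_control_revenue) / campaign_cost

# A 20% control group generating $87.5K scales to $350K at the test group's size:
scaled = scale_control(87_500, test_size=8_000, control_size=2_000)
print(scaled)                           # 350000.0
print(iroas(500_000, scaled, 20_000))   # 7.5 -> the 7.5:1 iROAS from the example
```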
Incremental CPA (iCPA)
How to calculate it: (Campaign Cost / Incremental Conversions)
If the campaign cost $20K and drove 4,000 conversions that wouldn’t have happened otherwise: $20K / 4,000 = $5 iCPA.
A campaign with 50% incremental lift sounds impressive, but if the revenue from that lift is $15K and the campaign cost you $20K, you actually lost money: your iROAS is 0.75:1.
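Putting iCPA and the profit check together (again a minimal sketch with our own function names, using the article’s numbers):

```python
def icpa(campaign_cost, incremental_conversions):
    """Cost per conversion that would not have happened otherwise."""
    return campaign_cost / incremental_conversions

def incremental_profit(incremental_revenue, campaign_cost):
    """Lift can be positive while profit is negative, so check both."""
    return incremental_revenue - campaign_cost

print(icpa(20_000, 4_000))                 # 5.0 -> the $5 iCPA example
print(incremental_profit(15_000, 20_000))  # -5000: impressive lift, lost money
print(15_000 / 20_000)                     # 0.75 -> the 0.75:1 iROAS above
```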
5. Avoid Common Pitfalls
These include:
- Underpowered sample sizes that can’t detect real differences. For example, if you’re running a cash back offer to 500 people with a 10/90 control/test split, your control group of 50 isn’t large enough to draw any meaningful conclusions.
- Mismatched control groups that don’t reflect your test group’s demographics and behaviors. If your test group skews toward high-frequency buyers in urban markets and your control group is mostly rural, infrequent shoppers, the difference in outcomes could just reflect the audience mismatch, not your campaign impact.
- Insufficient duration that captures snapshots rather than patterns.
- Cross-channel contamination where the control group gets exposed through a different channel. If you withhold a push notification from your control group but they see the same promotion on Instagram or through an in-store display, your control is not a control anymore.
6. Feed Results into Your Budget Workflow
Build a process where results flow directly into your next budget cycle. Rank channels by true incremental value. Share findings with finance in their language — cost per incremental acquisition, payback period.
Start Proving Your Marketing Works With Incrementality Testing
The era of taking attribution dashboards at face value is ending. Retail and QSR marketers who adopt incrementality testing gain:
- Certainty that their budgets drive genuine growth
- Confidence to defend every dollar to their CFO
- Assurance that when they scale a campaign, they’re scaling something that works
Not sure where to start with incrementality testing?
Some reward demand platforms, like Kard, bake incrementality testing into their cash back offer campaigns to prove that hyperpersonalized offers scale customer acquisition.
Want to see it in action? See how it works →
FAQs About Incrementality Testing
What is the minimum investment to start incrementality testing?
You don’t need a massive budget. The best cash back offer platforms handle the experimental design (control groups, randomization, lift measurement) as part of their standard campaign infrastructure. And with a platform like Kard, you only pay for performance (i.e., when a customer makes a purchase using your offer). The primary investment is campaign spend itself. Start with a single channel, allocate enough budget to reach a statistically significant audience, and build from there.
Which methodology should retail and QSR brands use first?
Audience-based holdout tests. They work natively within digital and reward demand channels, and produce clean results. If you have a significant in-store component that can’t be controlled at the individual level, geo-based testing is a strong alternative.
What ROAS improvement can we expect?
Our case studies show incremental ROAS from 7.5:1 to 11:1. The improvement comes not just from running better campaigns, it comes from stopping underperforming ones. When you see which promotions drive real incremental revenue versus capturing existing demand, you shift budget away from waste.