May 8, 2026
Most discount fraud detection lives inside a single code. A merchant flags a coupon, a customer redeems it, and the system watches for that same customer redeeming the same coupon again. That model handles the obvious case well — one code, one rule, one customer trying to claim it twice.
It does not handle what this case study is about.
The brand in this case study runs a heavy influencer program. They have dozens of affiliate creators, each with a unique discount code. A customer using INFLUENCER_A_15 gets 15% off their first order, and the influencer behind that code earns a commission on the sale. Standard structure for a creator-led acquisition channel. Alongside that, the brand also runs a generic new-customer discount available on the site for visitors who arrive without a creator code.
Every individual discount in this setup is well-protected. Each code only allows a single redemption per email address. The new-customer discount is restricted to first orders. Their existing fraud tools watch every code for repeat email patterns and flag duplicate redemptions on the same code.
The problem was not happening inside any of those rules. It was happening across them.
The merchant first noticed the issue when they ran a routine reconciliation between the affiliate platform and Shopify. They were tracking commissions paid against orders fulfilled, looking for the usual things — chargebacks, refunds, returns that should claw back commission. What surfaced instead was a different shape of loss.
For a meaningful share of "first-time customer" orders, the same customer — same shipping address, same phone, often a clearly-related email — had also placed a prior "first-time customer" order using a different influencer's code. Or had used a creator code on order one and then come back as a "new customer" using the on-site discount on order two. Or had cycled through three influencers across three orders, each one technically a first-time redemption of the code in question, none of them actually a new customer.
In every case, the brand had paid for one acquisition twice: a second discount given to the customer and, whenever a creator code was involved, a second commission paid to the influencer behind it.
The marketing stack saw two new customers. The Shopify discount system saw two clean redemptions. The fraud detection tool saw two unrelated orders on two unrelated codes. The actual situation — one customer extracting two acquisition incentives — was invisible to every system involved.
This is a structural limitation, not a configuration mistake. Discount fraud detection — including, until this project, CustomerGenius — typically scopes its scoring to the code being redeemed. The merchant flags INFLUENCER_A_15, and the system compares the order using INFLUENCER_A_15 against prior orders that also used INFLUENCER_A_15. It does not compare against orders that used INFLUENCER_B_15. From the system's perspective, those are different promotions.
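To make the scoping difference concrete, here is a minimal Python sketch. The records and field names are illustrative assumptions, not CustomerGenius internals:

```python
# Hypothetical order records; field names are illustrative, not a real schema.
history = [
    {"id": 1001, "code": "INFLUENCER_A_15", "ship_to": "12 Oak St, Apt 4"},
    {"id": 1002, "code": "INFLUENCER_B_15", "ship_to": "12 Oak St, Apt 4"},
]

def per_code_candidates(order, history):
    """Traditional scoping: compare only against prior orders on the SAME code."""
    return [o for o in history if o["code"] == order["code"]]

def group_candidates(order, history, group):
    """Cross-evaluation: compare against prior orders on ANY code in the group."""
    if order["code"] not in group:
        return []
    return [o for o in history if o["code"] in group]

group = {"INFLUENCER_A_15", "INFLUENCER_B_15", "INFLUENCER_C_15", "WELCOME15"}
incoming = {"id": 1003, "code": "INFLUENCER_C_15", "ship_to": "12 Oak St, Apt 4"}

print(len(per_code_candidates(incoming, history)))      # 0: nothing to compare against
print(len(group_candidates(incoming, history, group)))  # 2: both prior redemptions surface
```

Per-code scoping finds nothing to compare the third order against, even though the same address has already redeemed the offer twice.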
For most discount structures that scoping is correct. Two coupon codes for two different campaigns are usually unrelated, and a customer using both does not necessarily indicate fraud. A wholesale customer might use a B2B code on one order and a holiday code on another. A loyal repeat customer might catch a Black Friday sale and then a New Year's promotion. Treating those as suspicious would generate noise, not signal.
The case where it breaks is the one this brand was running: a portfolio of codes that all serve the same purpose. Every influencer code is a first-time customer discount. The on-site new-customer code is a first-time customer discount. They are economically identical promotions distributed through different channels. A customer is supposed to be eligible for exactly one of them — not all of them in sequence.
The merchant did not have a way to express that eligibility rule. Shopify's discount system did not know the codes were related. The fraud detection tool watched each one independently. Every redemption looked legitimate within its own scope, even when the cumulative pattern was not.
The conversation that started this project was specific. The merchant came to us and described the situation in roughly these terms:
> "We have a group of discounts that all do the same thing. They are all introductory offers — some belong to specific creators, some are on-site. We do not want a single customer to use one and then come back and use another. If they used INFLUENCER_A_15, they should not also be able to use INFLUENCER_B_15 a week later under a fresh email. We are paying two commissions and giving two discounts for one acquisition."
The ask was for cross-evaluation. Not blocking individual codes from being reused — that already worked. Treating a defined group of codes as a single eligibility pool, where any usage of any code in the group counted as having claimed the offer, regardless of which specific code was redeemed.
This was not a feature we had in the product. It also did not exist anywhere in the discount fraud tools we surveyed. It was a real gap.
CustomerGenius built cross-evaluation discount groups as a direct response. The merchant defines a group of discount codes that share an eligibility pool, and CustomerGenius scores incoming orders using any code in the group against prior orders using any other code in the group.
The configuration is intentionally simple. In the merchant's CustomerGenius dashboard, they create a discount group, give it a name, and add codes to it. For this brand, the first group looked roughly like:
- INFLUENCER_A_15
- INFLUENCER_B_15
- INFLUENCER_C_15
- ... (one entry per creator)
- WELCOME15 (the on-site new-customer code)
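Expressed as data, a group like this might look something like the following sketch. The actual setup happens in the dashboard, and these field names are assumptions rather than the real schema:

```python
# Illustrative only -- not the CustomerGenius configuration schema.
DISCOUNT_GROUPS = {
    "first-order-offers": {
        "codes": {
            "INFLUENCER_A_15",
            "INFLUENCER_B_15",
            "INFLUENCER_C_15",
            # ...one entry per creator...
            "WELCOME15",
        },
        "lookback_days": 90,  # the comparison horizon referenced later in this case study
    },
}
```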
Once the group is defined, the scoring logic shifts. When an order comes in using any code in the group, CustomerGenius compares it against the merchant's full history of orders that used any code in the group — not just orders on the same code. The five-signal scoring runs identically: email, shipping address, phone number, payment card, and fuzzy name match are compared exactly as they would be for a repeat redemption of a single code.
If enough signals match a prior order from the group, the new order is flagged or auto-cancelled depending on the merchant's threshold settings. From the customer's perspective, attempting to use a different code from the group after already redeeming one looks indistinguishable from attempting to reuse the same code twice.
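A rough sketch of what group-scoped, multi-signal scoring can look like, in deliberately simplified Python (the normalization and the 0.85 name-similarity cutoff are illustrative assumptions, not the production logic):

```python
from difflib import SequenceMatcher

def norm(value):
    # Crude normalization: lowercase and collapse whitespace.
    return " ".join(str(value).lower().split())

def signal_matches(incoming, prior):
    """Count how many of the five identity signals agree between two orders."""
    matches = 0
    matches += norm(incoming["email"]) == norm(prior["email"])
    matches += norm(incoming["ship_to"]) == norm(prior["ship_to"])
    matches += norm(incoming["phone"]) == norm(prior["phone"])
    matches += incoming["card_fingerprint"] == prior["card_fingerprint"]
    # Fuzzy name match tolerates minor variation ("Jane Doe" vs "jane m doe").
    similarity = SequenceMatcher(None, norm(incoming["name"]), norm(prior["name"])).ratio()
    matches += similarity > 0.85
    return matches

def order_is_suspect(incoming, history, group, threshold=2):
    """Flag if ANY prior order on ANY code in the group matches enough signals."""
    candidates = (o for o in history if o["code"] in group)
    return any(signal_matches(incoming, o) >= threshold for o in candidates)
```

The only structural change from single-code detection is the candidate set: the signal comparison itself is unchanged.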
The "or any alias of the customer" part of the original ask is the part that makes the feature actually solve the problem.
A customer who has caught on to the loophole is, by definition, willing to create new email addresses to keep redeeming. If the cross-evaluation only matched on email, a fresh Gmail account would dodge the check entirely. The customer would still ship to the same apartment, pay with the same card, and use the same phone — but the email would be new, and the system would treat them as a separate person.
CustomerGenius's scoring is identity-based, not email-based. The same multi-signal logic that catches duplicate-email fraud on a single code applies inside a discount group. If a new email shows up at a previously-redeemed shipping address, with a previously-redeemed phone number, the system identifies the underlying customer and flags the order even though the email itself has never been seen before.
This is the part that makes the feature defensible against a determined abuser. The merchant is not asking "has this email used a code in this group" — they are asking "has this person, by any combination of identifiers we can compare, used a code in this group." The answer is what determines whether the order goes through.
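A worked example makes the point. Two orders that differ only in email still agree on four of the five signals, far past a two-signal threshold:

```python
signals = ("email", "ship_to", "phone", "card_fingerprint", "name")

prior = {
    "email": "jane@example.com",
    "ship_to": "12 Oak St, Apt 4",
    "phone": "+1 555 0100",
    "card_fingerprint": "fp_abc123",
    "name": "Jane Doe",
}
# Same person, fresh inbox: everything except the email is unchanged.
incoming = dict(prior, email="jd.fresh.2026@example.com")

agreeing = sum(prior[k] == incoming[k] for k in signals)
print(agreeing)  # 4 -- an email-only check would have scored this order 0
```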
The merchant moved through three stages of rollout, which is roughly the path we recommend for any new detection rule going live on real revenue.
Stage one — observation only. For the first ten days, cross-evaluation ran in scoring mode without any action attached. Every order using a code in the group was scored, the matches were logged, but no orders were cancelled. This let the merchant see the volume of cross-redemption patterns in their actual data without making any customer-visible changes.
The data validated the original hypothesis. Across the observation window, the merchant saw a meaningful share of orders using a creator code where the same customer — by address, phone, or fuzzy name match — had already redeemed a different code in the group within the prior 90 days.
Stage two — flag for review. The merchant moved cross-evaluation into a flag-and-review configuration. Orders matching the pattern were sent to a review queue inside the CustomerGenius dashboard. The customer service team had a daily window to walk through the queue, confirm the matches were correct, and decide on the action. For most matches, the action was a refund of the discount portion, with a note logged against the customer record. For a few edge cases — a clear household relationship, a shared business address — the team marked the orders as legitimate and they passed through.
This stage gave the team confidence in the precision of the match before handing the system autonomy.
Stage three — auto-cancel. After two weeks of flag-and-review, the false positive rate was low enough that the merchant moved cross-evaluation to auto-cancel for high-confidence matches (two or more signal matches) and kept the review queue for single-signal matches. From that point forward, discount stacking across the group resolved itself within seconds of an order being placed.
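The three stages map naturally onto a per-group enforcement mode. A hypothetical sketch of the dispatch logic, with illustrative mode names and actions:

```python
from enum import Enum

class Mode(Enum):
    OBSERVE = "observe"      # stage one: score and log, take no action
    FLAG = "flag"            # stage two: route every match to the review queue
    AUTO_CANCEL = "auto"     # stage three: act on high-confidence matches

def dispatch(matched_signals, mode):
    """Decide the action for an order that was scored against its group."""
    if matched_signals == 0:
        return "pass"
    if mode is Mode.OBSERVE:
        return "log"
    if mode is Mode.FLAG or matched_signals < 2:
        return "review_queue"   # single-signal matches stay with a human
    return "auto_cancel"        # two or more signals: cancelled within seconds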
The most important number for this merchant was the change in legitimate-versus-extracted commission spend. Before cross-evaluation, a meaningful share of every month's affiliate commission was going to creators whose codes had been redeemed by customers who had already used a different creator's code on a prior order. That commission was, in practical terms, being paid twice for the same acquisition.
Within the first full month after auto-cancel went live, the merchant saw the share of duplicate-pattern orders in their commissioned data drop sharply. The orders that had been generating doubled-up commission stopped getting through. Total commission spend went down. Creator-attributed first-order volume — the legitimate kind, where the customer was actually a new customer — did not change in any meaningful way.
This is the outcome that matters. The fraud was sitting on top of a real acquisition channel. Removing the fraud did not damage the channel. It removed the noise that had been inflating it.
The merchant also reported a side benefit on the creator side. A few of the influencers in the program had been generating noticeably high "new customer" volume that was not converting into repeat orders. After cross-evaluation went live, the volume from those creators dropped to a level that matched their actual reach. The merchant could now distinguish between creators driving real new customers and creators whose codes were popular with stackers — a signal that had been completely buried before.
Cross-evaluation discount groups are designed for a specific situation: the merchant has multiple discount codes that serve the same business purpose, and a customer should only be eligible for one of them. The clearest examples are:
- A portfolio of influencer or affiliate codes that all gate the same first-order discount, alongside an on-site welcome code for visitors who arrive without one.
- Brand-collaboration codes that each wrap the same introductory offer for a different partner's audience.
- Win-back offers distributed through multiple channels or campaigns, where a lapsed customer should be able to claim the offer once, not once per channel.
The shape of the rule is always the same. There is a portfolio of codes that look different from a marketing perspective but represent a single eligibility pool from a customer perspective. Cross-evaluation tells the system to enforce the eligibility pool, not just the individual codes.
It is worth being clear on where this should not be used. Some discount portfolios genuinely should allow stacking across codes:
- A wholesale B2B code and a seasonal holiday code serve different purposes; a customer using both is not gaming anything.
- A Black Friday promotion followed by a New Year's promotion is a loyal repeat customer catching two separate offers, exactly as intended.
Cross-evaluation is a hammer for cases where the merchant has decided that two codes are the same offer in different wrappers. It is not a default state — it is a deliberate configuration applied to specific groups.
The merchants who should use it are the ones who can identify the group up front. If the marketing logic is "all of these codes are the new-customer discount, just distributed differently," cross-evaluation enforces that logic at the order level.
The economic case for cross-evaluation is sharpest in influencer-led acquisition because the cost stack is so explicit. Every redemption involves both a discount given to the customer and a commission paid to the creator. When a customer cycles through creators, both costs duplicate, and neither cost was budgeted to be incurred twice for the same person.
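A quick illustration with made-up numbers (the order value and 10% commission rate are assumptions, not the brand's figures) shows how fast the duplication compounds:

```python
order_value = 100.00
discount = 0.15 * order_value                 # 15% first-order discount -> $15.00
commission = 0.10 * (order_value - discount)  # assumed 10% commission on the discounted sale -> $8.50
cost_per_acquisition = discount + commission  # $23.50 to acquire one customer

# A customer who cycles through two creator codes incurs the stack twice
# for what is, economically, a single acquisition:
print(2 * cost_per_acquisition)  # 47.0
```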
The brand in this case study had built a sophisticated affiliate program. They had analytics on creator performance, attribution windows, commission tiers, and seasonal campaigns. What they did not have was a way to enforce that the program's first-order incentive was actually a one-time thing per customer. Cross-evaluation gave them that enforcement without changing any of the marketing structure on the surface.
For any brand running a substantial influencer or affiliate channel, the question worth asking is whether the codes in the program function as separate offers or as one offer distributed through multiple creators. In most influencer programs, it is the latter. And once that is true, cross-evaluation is the missing piece.
Discount fraud is most often described as a single-code problem: one customer, one coupon, repeated redemptions under different emails. That framing is correct as far as it goes, but it misses the larger pattern that emerges when a merchant operates a portfolio of codes that all gate the same offer. Per-code monitoring keeps each individual code clean. It does not keep the offer clean.
Cross-evaluation discount groups extend duplicate-detection logic from a single code to a defined pool of codes, and apply the same five-signal identity scoring inside that pool that CustomerGenius applies to individual codes. The result is that a customer cannot launder a second redemption of the same offer through a different code in the group — not by switching influencer, not by switching channel, not by switching email.
For the influencer-led brand in this case study, the change recovered duplicated commission spend, surfaced previously-hidden differences in creator quality, and did not affect legitimate new-customer volume. The discount portfolio kept doing what it was designed to do. It just stopped doing it twice for the same customer.
If your store runs multiple discount codes that all gate the same first-order or one-time offer — across influencers, channels, brand collaborations, or win-back campaigns — cross-evaluation discount groups close a gap that single-code fraud detection cannot reach. See how the feature fits into the broader fraud detection system on the CustomerGenius pricing page, or install CustomerGenius from the Shopify App Store for a 14-day free trial.
CustomerGenius automatically detects and refunds fraudulent discounted orders — starting at $9.99/month with a 14-day free trial.
Try CustomerGenius Free