Most brands cannot answer a basic question about counterfeits on US marketplaces: what is the actual revenue impact this quarter, and how confident are we in that number? The default answers tend to be either alarmist or dismissive, and neither holds up to a serious budgeting conversation. The methodology below is the one we use to produce a defensible figure.
Why most counterfeit cost figures are not credible
Two failure modes are common. The first is to count every counterfeit unit sold as a lost authorized sale, which assumes one-for-one substitution and almost always overstates the impact. The second is to count only confirmed customer complaints, which captures a tiny fraction of the actual leakage and almost always understates it.
A useful estimate has to account for several mechanisms at once: direct substitution, where a buyer chooses a counterfeit instead of an authorized item; price suppression, where authorized sellers discount to compete with fake inventory; brand-equity decay, where dissatisfied counterfeit buyers stop purchasing the brand at all; and channel distortion, where authorized retailers reduce shelf space because their margins are eroding.
Each of these has a different signal source and a different confidence interval. Treating them as a single number is what produces the unhelpful headline figures the industry has rightly grown skeptical of.
A workable measurement framework
The framework we use has four inputs. The first is a counterfeit-listing inventory across priority surfaces, deduplicated by seller fingerprint and weighted by estimated visibility (search rank, traffic share, sponsored placement). Listing counts alone are misleading; visibility is what determines exposure.
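As a rough illustration of the visibility-weighted inventory, the sketch below deduplicates listings by seller fingerprint and scores each seller by their most visible listing. The specific weighting (inverse rank decay, a flat sponsored boost) and all field names are illustrative assumptions, not the production model.

```python
from dataclasses import dataclass

@dataclass
class Listing:
    listing_id: str
    seller_fingerprint: str  # hash of seller signals used for dedup (assumed)
    search_rank: int         # 1 = top result
    traffic_share: float     # estimated share of category page views, 0..1
    sponsored: bool

def visibility_weight(l: Listing) -> float:
    """Toy visibility score: rank decay times traffic share, with a
    flat boost for sponsored placement. Illustrative weighting only."""
    rank_decay = 1.0 / l.search_rank
    boost = 1.5 if l.sponsored else 1.0
    return rank_decay * l.traffic_share * boost

def weighted_inventory(listings: list[Listing]) -> dict[str, float]:
    """Deduplicate by seller fingerprint, keeping each seller's most
    visible listing; returns fingerprint -> visibility weight."""
    best: dict[str, float] = {}
    for l in listings:
        w = visibility_weight(l)
        if w > best.get(l.seller_fingerprint, 0.0):
            best[l.seller_fingerprint] = w
    return best
```

The point of the structure is that two listings from the same seller count once, and a single top-ranked listing can outweigh dozens of buried ones.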
The second is a substitution-rate model that varies by product category. For products where the authentic article is freely available and price differentiation is small, substitution rates are low. For products where the authentic article is supply-constrained or the price gap is large, substitution rates can be meaningfully higher. We bracket these as ranges, not point estimates.
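Bracketing substitution as a range rather than a point estimate can be expressed directly. The category labels and rate values below are hypothetical placeholders; real values come from category-level analysis.

```python
# Hypothetical category ranges (low, high); real values are derived
# from availability and price-gap analysis per category.
SUBSTITUTION_RANGES = {
    "freely_available_small_gap": (0.01, 0.03),
    "supply_constrained_large_gap": (0.08, 0.15),
}

def substitution_loss_range(counterfeit_revenue: float, category: str) -> tuple[float, float]:
    """Estimated lost authorized revenue as a (low, high) interval,
    applying the category's substitution-rate range to observed
    counterfeit revenue."""
    low, high = SUBSTITUTION_RANGES[category]
    return counterfeit_revenue * low, counterfeit_revenue * high
```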
The third is a price-suppression measurement, taken by tracking authorized seller pricing on the same SKUs over time and comparing it against periods when counterfeit pressure was lower. The fourth is a customer-experience signal: support contacts, return reasons, and review sentiment that explicitly mentions authenticity.
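A minimal sketch of the price-suppression input, assuming per-period price observations on a single SKU split into low-pressure (baseline) and high-pressure periods. This naive before/after comparison does not control for confounders; that limitation is discussed later.

```python
from statistics import mean

def price_suppression_estimate(
    baseline_prices: list[float],
    pressured_prices: list[float],
    units_sold: float,
) -> tuple[float, float]:
    """Naive per-SKU comparison: average authorized price in
    low-counterfeit-pressure periods minus the average in high-pressure
    periods. Returns (per-unit suppression, estimated revenue impact).
    Promotions, seasonality, and macro effects are NOT controlled here."""
    gap = mean(baseline_prices) - mean(pressured_prices)
    return gap, gap * units_sold
```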
Illustrative numbers from a recent engagement
These figures are composites, representative of what mid-to-large consumer brands typically find once the framework is applied. Direct substitution losses on a single accessory category came out to roughly 4 to 6 percent of category revenue, well below the often-quoted double-digit figures but still a defensible eight-figure annual number for a brand at that scale.
Price suppression added another estimated 2 to 3 percent of category revenue, surfaced by comparing authorized seller margins on the affected SKUs against historically similar SKUs without comparable counterfeit pressure. This was the line item that surprised the commercial team the most, because it had been entirely invisible to them before.
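The comparison against historically similar SKUs is, in effect, a difference-in-differences: the margin change on affected SKUs minus the margin change on comparable unaffected SKUs, which nets out category-wide effects. A minimal sketch, assuming per-period margin observations for both groups:

```python
from statistics import mean

def diff_in_diff(
    affected_before: list[float],
    affected_after: list[float],
    control_before: list[float],
    control_after: list[float],
) -> float:
    """Difference-in-differences on margins: the change on SKUs under
    counterfeit pressure minus the change on comparable SKUs without it.
    A negative result suggests suppression beyond category-wide trends."""
    affected_change = mean(affected_after) - mean(affected_before)
    control_change = mean(control_after) - mean(control_before)
    return affected_change - control_change
```

The choice of control SKUs carries the argument here; a poorly matched control group reintroduces exactly the confounders the method is meant to remove.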
Brand-equity decay was the hardest input to quantify. Using post-purchase survey data and repeat-purchase rates among customers who had reported counterfeit experiences, the team modeled a multi-year revenue impact in the low single-digit percent range. It was deliberately presented as a range with explicit assumptions, not a single number.
What the framework changes operationally
Once a brand has a defensible cost figure, the conversation about enforcement budget changes shape. Instead of arguing about whether brand protection is over- or under-resourced in absolute terms, leaders can ask whether the program's cost is justified by the revenue at risk in each category, and where the marginal dollar should go.
The framework also exposes priorities that purely volume-based reporting misses. A category with a small number of high-visibility counterfeit listings can outweigh a category with a large number of low-visibility ones, and a category showing measurable price suppression can be more urgent than one showing higher raw listing counts.
Where the methodology is weakest
Two areas remain genuinely difficult. The first is causal attribution for price suppression: it is hard to fully separate counterfeit pressure from broader category dynamics, promotional cycles, and macroeconomic effects. The honest answer is to present a range and to disclose the confounding factors, not to claim precision the data cannot support.
The second is brand-equity decay, which is a multi-year effect modeled from short-window data. We have found it more useful to express this as a sensitivity analysis (best case, base case, downside) than as a single forecast. Leaders generally make better decisions with an honest range than with a precise-looking number whose confidence is much lower than it appears.
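One way to express that sensitivity analysis: model extra annual churn among customers with a counterfeit experience under three assumed rates. The churn-uplift values are illustrative, and the sketch conservatively counts one year of lost value per churned customer.

```python
def equity_decay_scenarios(
    affected_customers: int,
    annual_value: float,
    years: int = 3,
) -> dict[str, float]:
    """Best/base/downside sensitivity for brand-equity decay.
    churn_uplift = assumed extra annual churn among customers who had a
    counterfeit experience; all three rates are illustrative assumptions.
    Each churned customer contributes one year of lost value."""
    scenarios = {"best": 0.02, "base": 0.05, "downside": 0.10}
    out: dict[str, float] = {}
    for name, churn_uplift in scenarios.items():
        lost = 0.0
        retained = float(affected_customers)
        for _ in range(years):
            churned = retained * churn_uplift
            retained -= churned
            lost += churned * annual_value
        out[name] = lost
    return out
```

Presenting all three numbers side by side makes the assumption driving the spread, the churn uplift, explicit and contestable.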
A practical starting point
Brands that want a credible number this quarter, rather than next year, can start with two of the four inputs: a visibility-weighted listing inventory and a category-level substitution range. This will not capture the full picture, but it is enough to move beyond the unhelpful extremes and into a budgeting conversation grounded in evidence.
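The two starter inputs combine into a first revenue-at-risk range in one step. The sketch below assumes the visibility-weighted inventory has been summarized into a single counterfeit share of category exposure; all parameter names are illustrative.

```python
def quarter_one_estimate(
    category_revenue: float,
    counterfeit_visibility_share: float,  # fraction of category exposure
                                          # held by counterfeit listings
    substitution_range: tuple[float, float],
) -> tuple[float, float]:
    """Combine the two starter inputs into a (low, high) revenue-at-risk
    range: exposure attributable to counterfeits times the bracketed
    substitution rates for the category."""
    low, high = substitution_range
    exposure = category_revenue * counterfeit_visibility_share
    return exposure * low, exposure * high
```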
From there, price-suppression measurement is the next-highest-leverage addition, because it is usually the largest invisible cost and it draws the commercial organization into the conversation in a way that pure listing counts never do.
A defensible counterfeit-cost figure is less about precision and more about transparency: clear inputs, explicit ranges, and a methodology that survives questioning from finance and commercial leaders. Brands that invest in the measurement framework first tend to make better enforcement decisions afterwards.