For enterprise ecommerce companies, the pandemic has set in motion an online trajectory that continues to produce more orders and more revenue.
It has also increased the amount of fraud. But fighting fraud has changed: Because brand-new online shoppers don’t behave like typical customers, it can be more difficult to distinguish them from fraudsters.
Most large online retailers have a dedicated fraud prevention team with anywhere from two to 10 employees who handle hundreds of thousands of orders each week.
As much as these enterprise companies want zero fraud, it’s simply not realistic. The only way to prevent every fraudulent order would be to manually review every order – unworkable even for modestly sized businesses. Few companies have the resources to review that many ecommerce orders on an ongoing basis, even if they “borrowed” staff from other departments (a common practice during peak seasons).
So, manually reviewing every order is impractical. However, a completely automated solution also poses problems and presents risks.
Completely automated fraud solutions rely on rules and filters that will typically flag anything even remotely suspicious as fraud. This results in lower approval rates, false declines, and major problems with customer satisfaction and reputation management.
Today’s consumers are far savvier online than ever before, especially younger consumers. They know ecommerce and they expect it to work for them. So when their valid order is declined, they tend to respond in devastating ways:
That’s alarming enough already. Now, consider that 90% of declined orders are not fraud. This means companies that trust completely automated fraud prevention solutions are almost certainly losing sales and angering their customers.
Some retailers try to simplify fraud prevention with lists of customers to allow or deny. These may be VIP customers or company executives who are supposed to get special treatment and have their orders automatically processed. Or these may be customers who are notorious for returning purchases in poor condition.
Either way, these lists are problematic. What will happen if a fraudster gains access to the credit card information for anyone on an allow list? The fraudster gets to be a kid in a candy store with no safeguards.
This is because allow and deny lists circumvent the entire fraud prevention process. Those orders are processed and approved outside of a fraud prevention database. So, your fraud protection team won’t have the opportunity to pre-emptively detect that a fraudster has stolen a VIP’s credit card and is using it to make high-dollar purchases.
The best fraud prevention solution is one that incorporates both automated and secondary reviews. Simply put, retailers need a blended model to get the most fraud prevention.
And that’s where most enterprise companies get stuck. They don’t know how to create a balanced approach that is the most efficient and effective solution.
Fortunately, at ClearSale, that’s what we know how to do best.
Our process is based on best practices, industry intelligence, and fraud experience across industries, markets and order types. If a fraud scheme has been attempted, we’ve seen it and have learned how to recognize it.
In fact, we’ve developed a statistical model that uses over 70 constructed variables with more than 300 possible fraud categories and scores. This model covers our entire database of historical orders. Its rigor is such that our statistical model scores higher on the KS test than credit models do. And it allows us to automatically approve very large numbers of orders.
When it comes to identifying which orders should be reviewed manually, our system flags potential fraud and we determine whether it makes sense to further investigate the order.
To help companies better understand how to make data-driven approval decisions and increase revenue, we’re showing you what’s behind the curtain and revealing how our blended model works.
There are a number of calculations that help determine which orders need to undergo secondary review and which should be automatically approved. Together, they determine a cutoff point or curve that can be plotted on a graph.
Let’s look at each of the calculations:
Fraud probability is a score assigned to order values based on past experience. Companies need insight into their order history to make this determination. For example, you may know that for every 100 orders worth $450,000, two of them lead to a chargeback. That means the fraud probability for a $450,000 order is 2%.
The expected loss is the product of fraud probability and the total value of your order.
Expected Loss = probability of fraud x total order value
So, for that same order value of $450,000, the expected loss is $9,000.
$9,000 = 0.02 x $450,000
If the cost per order for a secondary review is less than $9,000, you may want to manually review orders of that value.
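As a rough sketch, the expected-loss rule above can be written out in a few lines of Python. The $50 review cost used here is an illustrative assumption, not a real figure:

```python
# Hedged sketch of the expected-loss rule described above.
# The review cost is an illustrative assumption.

def expected_loss(fraud_probability: float, order_value: float) -> float:
    """Expected Loss = probability of fraud x total order value."""
    return fraud_probability * order_value

def worth_reviewing(fraud_probability: float, order_value: float,
                    review_cost: float) -> bool:
    """Manual review pays off when it costs less than the expected loss."""
    return review_cost < expected_loss(fraud_probability, order_value)

print(expected_loss(0.02, 450_000))        # 9000.0
print(worth_reviewing(0.02, 450_000, 50))  # True
```

In practice the review cost per order is far below the expected loss on high-value orders, which is why those orders are the ones routed to analysts.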
The automatic approval cutoff point is a variable based on the expected loss calculations for the range of total order values. Taking the above example, you would calculate the expected loss for a series of total order values.
Let’s say you also make the calculation for orders valued at $100,000 and those valued at $1,000,000, giving you a range of total order values. The fraud probability for each, again, would be based on your experience with that order value, and the expected loss would be calculated for each, as demonstrated in the table below.
| Total Order Value | Fraud Probability | Expected Loss |
| --- | --- | --- |
| $100,000 | 0.06 | $6,000 |
| $450,000 | 0.02 | $9,000 |
| $1,000,000 | 0.01 | $10,000 |
Plot the expected loss calculations for each order value on a graph and you’ve created your approval cutoff curve. Any order that falls below the curve should be automatically processed – and any order that is above the curve should undergo a secondary review.
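Using the illustrative figures from the table, the cutoff curve and routing rule might be sketched like this. The linear interpolation between points and the `route` function are assumptions for illustration, not ClearSale's production logic:

```python
# Sketch of an automatic-approval cutoff curve built from the table above.
# The (order value, fraud probability) points and the routing rule are
# illustrative assumptions, not production parameters.

HISTORY = [(100_000, 0.06), (450_000, 0.02), (1_000_000, 0.01)]

# Expected loss at each historical point defines the cutoff curve
CUTOFF = sorted((value, value * p) for value, p in HISTORY)

def cutoff_at(order_value: float) -> float:
    """Linearly interpolate the expected-loss cutoff for an order value."""
    if order_value <= CUTOFF[0][0]:
        return CUTOFF[0][1]
    if order_value >= CUTOFF[-1][0]:
        return CUTOFF[-1][1]
    for (x0, y0), (x1, y1) in zip(CUTOFF, CUTOFF[1:]):
        if x0 <= order_value <= x1:
            t = (order_value - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

def route(order_value: float, estimated_fraud_probability: float) -> str:
    """Orders whose estimated expected loss falls below the curve are
    processed automatically; orders above it go to secondary review."""
    estimated_loss = order_value * estimated_fraud_probability
    if estimated_loss > cutoff_at(order_value):
        return "secondary review"
    return "auto-approve"
```

For example, a $450,000 order scored at a 5% fraud probability carries a $22,500 estimated loss – well above the $9,000 point on the curve – so it would be routed to secondary review.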
This model isn’t infallible. There is a risk that automatically approved orders turn out to be fraudulent, and that orders that are automatically declined are actually valid (false declines).
For secondary review orders, there is a big risk of cancellation. When an order is flagged for secondary review, a group of analysts checks the data against credit bureaus and other sources to determine whether the order is valid. They may even contact customers directly to confirm their information and let the card owner know about the purchase. Some customers take the opportunity to back out of the purchase. In fact, every 1% of additional manual analysis results in an estimated 0.0115% of sales canceled.
To mitigate those risks, we validate our model before implementing it using historical data.
Validating this model involves back-testing where we take a random sample of previously approved orders and re-run them through the model to see what happens. Keep in mind, we already know which of the orders were fraudulent and resulted in chargebacks.
Then we compare the model results against the actual order outcomes. If they match or are close, we know the automatic approval cutoff curve is valid and the model is valid. If results are too disparate, we go back to the start and look at which calculation may be off.
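The back-testing step described above can be sketched as follows. The toy model, sample orders, and zero-tolerance threshold are all hypothetical, chosen only to show the shape of the comparison:

```python
# Minimal back-testing sketch: re-run orders with known outcomes through
# the model and compare its decisions with the actual chargeback history.
# The toy model and sample data below are hypothetical.

def back_test(orders, model, tolerance=0.02):
    """orders: list of (features, was_fraud) pairs with known outcomes.
    Returns (valid, error_rate): valid is True when the model's
    decisions closely match the actual outcomes."""
    wrong = sum(1 for features, was_fraud in orders
                if model(features) != was_fraud)
    error_rate = wrong / len(orders)
    return error_rate <= tolerance, error_rate

# Toy model: flag an order when its estimated expected loss tops $9,000
model = lambda f: f["value"] * f["fraud_prob"] > 9_000

sample = [
    ({"value": 450_000, "fraud_prob": 0.01}, False),  # low risk, no chargeback
    ({"value": 450_000, "fraud_prob": 0.05}, True),   # high risk, charged back
    ({"value": 100_000, "fraud_prob": 0.02}, False),  # low risk, no chargeback
]
valid, error_rate = back_test(sample, model, tolerance=0.0)
```

If `valid` comes back false, the next step is the one the text describes: go back and examine which calculation – fraud probability, expected loss, or the cutoff itself – is off.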
We can also use this data to optimize the automatic approval cutoff and lower operational costs.
Along every automatic approval cutoff curve is a “sweet spot” where you will get the most overall value from the process. To find it, you’ll need to adjust the automatic approval cutoff curve to be as close as possible to the expected loss curve for the same order values. This way, whether orders are automatically approved or sent to manual review, the cost is the same.
Other considerations that impact your automatic approval cutoff curve include:
Using a blended model for order review is a complicated process that requires expertise and analysis. It can fluctuate with peak times, historical insight and changing operational goals. The key is to adjust the variables to deliver the most value.